Difference in performance (using BETWEEN or IN select_range)

Hi,
I need to improve the performance of a report program. The program is a copy of a standard program with some changes. In the custom program I found a SELECT on the cluster table PCL4.
The standard program selects from this cluster using a range table that provides the min and max values.
The custom program uses a BETWEEN clause, passing the same min and max values.
Please suggest which of the two is better for performance.
Lokesh

hi
Use BETWEEN; it is very good from a performance point of view.
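For illustration only, here is a minimal sketch of the two forms being compared (it uses SBOOK rather than your PCL4 report, and all variable names and values are assumptions):
* Variant 1: explicit BETWEEN with the min/max values
DATA: T_SBOOK     TYPE STANDARD TABLE OF SBOOK,
      LV_DATE_MIN TYPE SBOOK-FLDATE VALUE '20070101',
      LV_DATE_MAX TYPE SBOOK-FLDATE VALUE '20071231'.
SELECT * FROM SBOOK INTO TABLE T_SBOOK
  WHERE CARRID = 'LH'
    AND FLDATE BETWEEN LV_DATE_MIN AND LV_DATE_MAX.
* Variant 2: IN with a range table holding a single I/BT line
DATA: R_FLDATE  TYPE RANGE OF SBOOK-FLDATE,
      WA_FLDATE LIKE LINE OF R_FLDATE.
WA_FLDATE-SIGN   = 'I'.
WA_FLDATE-OPTION = 'BT'.
WA_FLDATE-LOW    = LV_DATE_MIN.
WA_FLDATE-HIGH   = LV_DATE_MAX.
APPEND WA_FLDATE TO R_FLDATE.
SELECT * FROM SBOOK INTO TABLE T_SBOOK
  WHERE CARRID = 'LH'
    AND FLDATE IN R_FLDATE.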
Also refer to these points for future use regarding performance:
Ways of Performance Tuning
1.     Selection Criteria
2.     Select Statements
•     Select Queries
•     SQL Interface
•     Aggregate Functions
•     For all Entries
•     Select Over more than one Internal table
Selection Criteria
1.     Restrict the data in the selection criteria itself, rather than filtering it out in the ABAP code with a CHECK statement.
2.     Select with selection list.
Points # 1/2
SELECT * FROM SBOOK INTO SBOOK_WA.
  CHECK: SBOOK_WA-CARRID = 'LH' AND
         SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be optimized considerably by the code below, which avoids CHECK and selects with a selection list:
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
  WHERE CARRID = 'LH' AND
        CONNID = '0400'.
Select Statements   Select Queries
1.     Avoid nested selects.
2.     Select all the records in a single shot using the INTO TABLE clause of the SELECT statement rather than appending them row by row.
3.     When a base table has multiple indexes, the WHERE clause should follow the field order of an index, either the primary or a secondary index.
4.     For testing existence, use a SELECT .. UP TO 1 ROWS statement instead of a SELECT-ENDSELECT loop with an EXIT.
5.     Use SELECT SINGLE if all primary key fields are supplied in the WHERE condition.
Point # 1
SELECT * FROM EKKO INTO EKKO_WA.
  SELECT * FROM EKAN INTO EKAN_WA
      WHERE EBELN = EKKO_WA-EBELN.
  ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
    FROM EKKO AS P INNER JOIN EKAN AS F
      ON P~EBELN = F~EBELN.
Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops  only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
Point # 2
SELECT * FROM SBOOK INTO SBOOK_WA.
  CHECK: SBOOK_WA-CARRID = 'LH' AND
         SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be optimized considerably by the code below, which avoids CHECK, selects with a selection list and fetches the data in one shot using INTO TABLE:
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
  WHERE CARRID = 'LH' AND
        CONNID = '0400'.
Point # 3
To choose an index, the optimizer checks the field names specified in the WHERE clause and then uses an index whose fields appear in the same order. In certain scenarios it is advisable to check whether a new index can speed up the performance of a program. This comes in handy in programs that access data from the finance tables.
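As a sketch (the existence of this particular index is an assumption made only for illustration): suppose SBOOK has a secondary index containing CARRID and FLDATE in that order. Supplying exactly these fields in the WHERE clause lets the optimizer pick that index.
DATA T_SBOOK TYPE STANDARD TABLE OF SBOOK.
SELECT * FROM SBOOK INTO TABLE T_SBOOK
  WHERE CARRID = 'LH'                              " first index field
    AND FLDATE BETWEEN '20070101' AND '20070331'.  " second index field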
Point # 4
SELECT * FROM SBOOK INTO SBOOK_WA
  UP TO 1 ROWS
  WHERE CARRID = 'LH'.
ENDSELECT.
The above code is more optimized as compared to the code mentioned below for testing existence of a record.
SELECT * FROM SBOOK INTO SBOOK_WA
    WHERE CARRID = 'LH'.
  EXIT.
ENDSELECT.
Point # 5
If all primary key fields are supplied in the Where condition you can even use Select Single.
Select Single requires one communication with the database system, whereas Select-Endselect needs two.
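For example (a sketch; the key values are assumptions), SFLIGHT has the key fields CARRID, CONNID and FLDATE, so a fully specified access can use SELECT SINGLE:
DATA SFLIGHT_WA TYPE SFLIGHT.
SELECT SINGLE * FROM SFLIGHT INTO SFLIGHT_WA
  WHERE CARRID = 'LH'
    AND CONNID = '0400'
    AND FLDATE = '20070101'.
IF SY-SUBRC = 0.
  " the record exists
ENDIF.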
Select Statements           contd..  SQL Interface
1.     Use column updates instead of single-row updates
to update your database tables.
2.     For all frequently used Select statements, try to use an index.
3.     Using buffered tables improves the performance considerably.
Point # 1
SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
  SFLIGHT_WA-SEATSOCC =
    SFLIGHT_WA-SEATSOCC - 1.
  UPDATE SFLIGHT FROM SFLIGHT_WA.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
UPDATE SFLIGHT
       SET SEATSOCC = SEATSOCC - 1.
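Note that the column update above changes every row of SFLIGHT, just as the loop did. In practice you would normally restrict it, for example (a sketch; the carrier value is an assumption):
UPDATE SFLIGHT
       SET SEATSOCC = SEATSOCC - 1
       WHERE CARRID = 'LH'.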
Point # 2
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
  WHERE CARRID = 'LH'
    AND CONNID = '0400'.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
  WHERE MANDT IN ( SELECT MANDT FROM T000 )
    AND CARRID = 'LH'
    AND CONNID = '0400'.
ENDSELECT.
Point # 3
Bypassing the buffer increases the network load considerably.
SELECT SINGLE * FROM T100 INTO T100_WA
  BYPASSING BUFFER
  WHERE     SPRSL = 'D'
        AND ARBGB = '00'
        AND MSGNR = '999'.
The above mentioned code can be more optimized by using the following code
SELECT SINGLE * FROM T100  INTO T100_WA
  WHERE     SPRSL = 'D'
        AND ARBGB = '00'
        AND MSGNR = '999'.
Select Statements       contd…           Aggregate Functions
•     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
Consider the following extract.
Maxno = 0.
Select * from zflight where airln = 'LF' and cntry = 'IN'.
  Check zflight-fligh > maxno.
  Maxno = zflight-fligh.
Endselect.
The  above mentioned code can be much more optimized by using the following code.
Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
Select Statements    contd…For All Entries
•     FOR ALL ENTRIES creates a WHERE clause in which all the entries of the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
     The plus
•     Large amount of data
•     Mixing processing and reading of data
•     Fast internal reprocessing of data
•     Fast
     The Minus
•     Difficult to program/understand
•     Memory could be critical (use FREE or PACKAGE size)
Points that must be considered when using FOR ALL ENTRIES:
•     Check that data is present in the driver table
•     Sorting the driver table
•     Removing duplicates from the driver table
Consider the following extract:
Loop at int_cntry.
  Select single * from zfligh into int_fligh
    where cntry = int_cntry-cntry.
  Append int_fligh.
Endloop.
The above code can be optimized by using the following code.
Sort int_cntry by cntry.
Delete adjacent duplicates from int_cntry.
If NOT int_cntry[] is INITIAL.
            Select * from zfligh appending table int_fligh
            For all entries in int_cntry
            Where cntry = int_cntry-cntry.
Endif.
Select Statements    contd…  Select Over more than one Internal table
1.     It is better to use views instead of nested SELECT statements.
2.     To read data from several logically connected tables, use a join instead of nested SELECT statements. Joins are preferred only if all the primary key fields of the joined tables are available in the WHERE clause; if the primary keys are not provided, the join itself takes time.
3.     Instead of using nested Select loops it is often better to use subqueries.
Point # 1
SELECT * FROM DD01L INTO DD01L_WA
  WHERE DOMNAME LIKE 'CHAR%'
        AND AS4LOCAL = 'A'.
  SELECT SINGLE * FROM DD01T INTO DD01T_WA
    WHERE   DOMNAME    = DD01L_WA-DOMNAME
        AND AS4LOCAL   = 'A'
        AND AS4VERS    = DD01L_WA-AS4VERS
        AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
The above code can be optimized by extracting all the data from the view DD01V:
SELECT * FROM DD01V INTO  DD01V_WA
  WHERE DOMNAME LIKE 'CHAR%'
        AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
Point # 2
SELECT * FROM EKKO INTO EKKO_WA.
  SELECT * FROM EKAN INTO EKAN_WA
      WHERE EBELN = EKKO_WA-EBELN.
  ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
    FROM EKKO AS P INNER JOIN EKAN AS F
      ON P~EBELN = F~EBELN.
Point # 3
SELECT * FROM SPFLI
  INTO TABLE T_SPFLI
  WHERE CITYFROM = 'FRANKFURT'
    AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
    INTO SFLIGHT_WA
    FOR ALL ENTRIES IN T_SPFLI
    WHERE SEATSOCC < F~SEATSMAX
      AND CARRID = T_SPFLI-CARRID
      AND CONNID = T_SPFLI-CONNID
      AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above mentioned code can be even more optimized by using subqueries instead of for all entries.
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
    WHERE SEATSOCC < F~SEATSMAX
      AND EXISTS ( SELECT * FROM SPFLI
                     WHERE CARRID = F~CARRID
                       AND CONNID = F~CONNID
                       AND CITYFROM = 'FRANKFURT'
                       AND CITYTO = 'NEW YORK' )
      AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
Internal Tables
1.     Table operations should be done using explicit work areas rather than via header lines (see the sketch after this list).
2.     Always try to use binary search instead of linear search. But don’t forget to sort your internal table before that.
3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
4.     A binary search using secondary index takes considerably less time.
5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
6.     Modifying selected components using “ MODIFY itab …TRANSPORTING f1 f2.. “ accelerates the task of updating  a line of an internal table.
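Point # 1
A sketch for point 1 (the table and field names are assumptions): work with an explicitly declared work area instead of a header line, so the table body and the line currently being processed cannot be confused.
DATA: ITAB TYPE STANDARD TABLE OF SFLIGHT,
      WA   TYPE SFLIGHT.
LOOP AT ITAB INTO WA.
  WRITE: / WA-CARRID, WA-CONNID, WA-FLDATE.
ENDLOOP.
This is preferable to the obsolete header-line form
DATA ITAB LIKE SFLIGHT OCCURS 0 WITH HEADER LINE.
LOOP AT ITAB.
  WRITE: / ITAB-CARRID, ITAB-CONNID, ITAB-FLDATE.
ENDLOOP.
where ITAB sometimes refers to the table body and sometimes to the header line.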
Point # 2
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
IS MUCH FASTER THAN USING
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
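For example, with n = 1,000,000 entries a linear search needs on average about 500,000 comparisons, while a binary search needs at most about 20, since log2( 1,000,000 ) is roughly 20.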
Point # 3
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
IS FASTER THAN USING
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
Point # 5
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
  CHECK WA-K = 'X'.
ENDLOOP.
Point # 6
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
8.    If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH followed by ADD.
9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
10.   “DELETE ADJACENT DUPLICATES“ accelerates the task of deleting duplicate entries considerably as compared to “ READ-LOOP-DELETE-ENDLOOP”.
11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
Point # 7
Modifying the lines directly through a field symbol is faster than copying each line into a work area, changing it and writing it back with MODIFY.
e.g.,
LOOP AT ITAB ASSIGNING <WA>.
  I = SY-TABIX MOD 2.
  IF I = 0.
    <WA>-FLAG = 'X'.
  ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
  I = SY-TABIX MOD 2.
  IF I = 0.
    WA-FLAG = 'X'.
    MODIFY ITAB FROM WA.
  ENDIF.
ENDLOOP.
Point # 8
LOOP AT ITAB1 INTO WA1.
  READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
  IF SY-SUBRC = 0.
    ADD: WA1-VAL1 TO WA2-VAL1,
         WA1-VAL2 TO WA2-VAL2.
    MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
  ELSE.
    INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
  ENDIF.
ENDLOOP.
The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
LOOP AT ITAB1 INTO WA.
  COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent
of the number of entries (i.e. O(1)) .
Point # 9
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
  APPEND WA TO ITAB2.
ENDLOOP.
Point # 10
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
  IF WA = PREV_LINE.
    DELETE ITAB.
  ELSE.
    PREV_LINE = WA.
  ENDIF.
ENDLOOP.
Point # 11
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
  DELETE ITAB INDEX 450.
ENDDO.
12.   Copy internal tables by using "ITAB2[ ] = ITAB1[ ]" rather than "LOOP-APPEND-ENDLOOP"; the former is considerably faster.
13.   Specify the sort key as restrictively as possible to run the program faster.
Point # 12
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
  APPEND WA TO ITAB2.
ENDLOOP.
Point # 13
“SORT ITAB BY K.” makes the program run faster than “SORT ITAB.”
Internal Tables         contd…
Hashed and Sorted tables
1.     For single read access, hashed tables perform better than sorted tables.
2.     For partial sequential access, sorted tables perform better than hashed tables.
Hashed And Sorted Tables
Point # 1
Consider the following example where HTAB is a hashed table and STAB is a sorted table
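For reference, the two tables could be declared as follows (a sketch; the line type and the key field K are assumptions):
TYPES: BEGIN OF TY_LINE,
         K   TYPE I,
         VAL TYPE I,
       END OF TY_LINE.
DATA: HTAB TYPE HASHED TABLE OF TY_LINE WITH UNIQUE KEY K,
      STAB TYPE SORTED TABLE OF TY_LINE WITH UNIQUE KEY K,
      WA   TYPE TY_LINE,
      N    TYPE I.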
DO 250 TIMES.
  N = 4 * SY-INDEX.
  READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
  IF SY-SUBRC = 0.
  ENDIF.
ENDDO.
This runs faster for single read access than the same code on the sorted table:
DO 250 TIMES.
  N = 4 * SY-INDEX.
  READ TABLE STAB INTO WA WITH TABLE KEY K = N.
  IF SY-SUBRC = 0.
  ENDIF.
ENDDO.
Point # 2
Similarly, for partial sequential access STAB runs faster than HTAB:
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
Reward points if useful.
