Explain SQL format and Open SQL

Hi!
My problem is that if I want to use the "Enter SQL statement" functionality of ST05, I have to enter my SQL commands in a different format than in my ABAP code.
The format in the ABAP code:
select * from sbook as a inner join scarr as b on a~carrid = b~carrid
The format in ST05:
SELECT T_00 . *
FROM "SBOOK" T_00 ,
"SCARR" T_01
WHERE ( T_01 . "MANDT" = '000' AND T_00 . "CARRID" = T_01 . "CARRID" ) AND
T_00 . "MANDT" = '000'
1. Is there a conversion FM for this? How could I know the "explain format" without running ST05 and executing my command?
2. Where can I find a log of the SQL commands that were actually sent to the database engine?
Actually this is not an Oracle-specific problem, but I found no "general" database-related forum.
thanks,
Tamas

Hello,
ABAP code is "generic"; it does not depend on the database that is running below (SQL Server, Oracle, MaxDB, DB2/6).
But each of the mentioned databases can implement "SQL" in a slightly different way. The kernel performs the transformation between the "generic" ABAP code and the specific database (the database interface does it).
I do not think there is an easy way of knowing exactly how the transformation will be done, as it depends on several things, like SAP parameters.
As far as I know, the only way to see the exact SQL sent to the DB is to enable a trace.
From SAP, an SQL trace with ST05.
On the DB side, some tracing options can also be enabled.
But tracing everything can be "overkill"; therefore, I would only do an ST05 trace for the specific transaction/report to be studied.
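For question 2, there is no general ABAP-side log of every statement sent to the database, but most databases expose their statement cache. A minimal sketch, assuming Oracle (it requires SELECT privilege on V$SQL):
-- Shows the native statements the database actually parsed.
SELECT sql_text
FROM v$sql
WHERE upper(sql_text) LIKE '%SBOOK%';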

Similar Messages

  • Can somebody please explain how to format and then reinstall Mac Lion 10.7 without a CD

    Can somebody please explain how to format and then reinstall Mac Lion 10.7 without a CD?

    You will need either an Ethernet or Wifi Connection to the Internet - USB Mobile device is not supported.
    If you already have Lion installed you can boot to Recovery with Command R and use the Disk Utility to erase the Macintosh HD and then reinstall the OS from the Mac OS X Utilities.
    You will need the Apple ID used for purchase if the Mac did not ship with Lion installed originally.
    If you want to repartition the HDD you can use the Recovery Disk Assistant to make a recovery USB drive
    http://support.apple.com/kb/DL1433
    http://support.apple.com/kb/HT4848
    As Always - Back up the Mac before erasing unless you are confident there is absolutely nothing on the mac that you might possibly need later.
    If the machine is a 2011 build or later you might be able to boot to Internet Recovery with Command Option R

  • More Guru Winners for February 2015 in the T-SQL category and many others!

    It's been a busy week that also saw the
    TECHNET WIKI SUMMIT 2015
    Then we had the results for
    February's TechNet Guru competition ALSO posted!
    http://blogs.technet.com/b/wikininjas/archive/2015/03/19/technet-guru-february-2015.aspx
    Below is a summary of the medal winners for February. The last column lists a few of the comments from the judges.
    Unfortunately, runners up and their judge feedback comments had to be trimmed from THIS post, to fit into the forum's 60,000 character limit, however the full version is available on TechNet Wiki in the link above.
    Some articles only just missed out, so we may be returning to discuss those too, in future blogs.
     BizTalk Technical Guru - February 2015  
    Steef-Jan Wiggers
    BizTalk Server 2013 R2 Instrumenting a custom pipeline component with ETW
    Mandi Ohlinger: "Always a fan of helping our custom pipeline users. Great addition to this group."
    Sandro Pereira: "Images, format, descriptions, code and topic are excellent once again good work Steef-Jan."
    Vignesh Sukumar
    BizTalk BAM (Business Activity Monitoring)
    Sandro Pereira: "Great job on this article! Well explained and nice pictures, however the article format need to be improved and some proofreading is need"
    Mandi Ohlinger: "Welcome to the 'I heart BAM' fan club. Nice job on this topic. A MUST read for new-to-BAM users. "
    Steef-Jan Wiggers
    BizTalk Server 2013 R2 Instrumenting BAM Activity Tracking with ETW
    Sandro Pereira: "Images, format, descriptions, code and topic are excellent once again good work Steef-Jan."
    Mandi Ohlinger: "ETW for BAM Activities - LOVE it. Nice use of the Framework. "
     Forefront Identity Manager Technical Guru - February 2015  
    Wim Beck
    FIM2010: Filter objects on export
    PG: "Simple, targeted but nice article, nice layout. "
    Søren Granfeldt: "Nice. Would be perfect with a complete code sample."
     Microsoft Azure Technical Guru - February 2015  
    saramgsilva
    Azure Mobile Services: How to see the log files in server
    JH: "Log files are one of the most important things in a production environment. This article shows hows you can do that for the Azure Mobile Services in a nice and easy way."
    Alan Carlos: "Great article!"
    Ed Price: "Very useful topic! These are a great set of articles!"
    saramgsilva
    Azure Mobile Services: How to see the WebConfig file published
    Ed Price: "Great detail and fantastic use of images! I love all the in-line links!"
    JH: "Sometimes it is hard to tell when working in a multi-environment what configuration was published to the Server. The article shows short and easy how to do that for the Azure Mobile Services."
     Miscellaneous Technical Guru - February 2015  
    Arleta Wanat
    Retrieve all site mailboxes in your Office 365 tenant
    Durval Ramos: "This article has a well content, images and code that help to understand the solution. It has References and was Translated into more two languages. Good job!"
    Richard Mueller: "Good links. A great tutorial."
    Andy ONeill
    Silverlight: No Need to BringIntoView
    Durval Ramos: "A well formatted article is easier and more pleasant to read. This script is useful"
    Richard Mueller: "Good demonstration of a new feature."
    Chen V
    PowerShell : Enable Auto Reply for Shared Mail Box
    Durval Ramos: " A good solution originated of TechNet Forum. The script and images make it easy to understand and ensure you get the best interest to reader."
    Richard Mueller: "Good documentation of this feature."
     SharePoint 2010 / 2013 Technical Guru - February 2015  
    Geetanjali Arora
    SharePoint Online : Performing Batch Operations using REST API
    KB: "Very well explained article on a new and much awaited feature. Although Andrew Connell already explained this topic in several posts, this article still contains added value."
    Ed Price: "I love the History section. The formatting is amazing. And the References and See Also sections at the bottom are great icing on the cake. This is an important topic that's done incredibly well!"
    Matthew Yarlett
    Using the SpellCheck Webservice with the TinyMCE Richtext Editor and
    AngularJS in Office 365
    KB: "I read this article with growing interest, it contains a lot of added value. Very well and in-depth explanation. "
    Ed Price: "Great scenario! Good use of images, code, detail, and References! Could possibly use a greater breakdown and explanation of the code. This article just gets more and more interesting and valuable as you read it! Great job!"
    Arleta Wanat
    SharePoint Online: Turn on support for multiple content types
    in a list or library using Powershell
    KB: "Really nice, interesting and detailed article!"
    Ed Price: "The Content Types section helps explain this a lot! I also love the downloads at the end. What a fantastic resource!"
     Small Basic Technical Guru - February 2015  
    Nonki Takahashi
    Small Basic: Key Input
    Michiel Van Hoorn: "Great improvement."
    RZ: "Very nice explanation and examples of key input handling"
    Ed Price - MSFT
    Small Basic: The History of the Logo Turtle
    RZ: "Turtle (Logo) was the first programming language for many, including perhaps some of the Small Basic prorammers. Nice article explaining the history."
    Michiel Van Hoorn: "A nice background article and hopefull inspiration for those who want to start in robotics"
    Nonki Takahashi
    Small Basic: TechNet Wiki Article List
    Michiel Van Hoorn: "This is great! Perfect as a local cache of the articles. "
    RZ: "A good example"
     SQL BI and Power BI Technical Guru - February 2015  
    Sylvain PONTOREAU
    PowerBI API in .Net
    RB: "Great walkthrough. Looking forward for the WP8 version of the app ;)"
    PT: "Sylvain, very nice job with this. This is a timely topic about an emerging product that has great potential. This is a very good example of a well-written post on an interesting subject with enough information to be valuable to a
    solution developer. I will personally take time to explore the Power BI API and use your examples. "
     SQL Server General and Database Engine Technical Guru - February 2015  
    Ronen Ariely
    SQL Server Books Online
    AM: "Thank you for sharing this with us. It is quite informative and let us get familiar with BOL after the change from previous versins."
    Ed Price: "Nice! A very helpful introduction to Books Online! It also tells my technical writer friends that their hard work is appreciated! =^)"
    Durval Ramos
    How to Collect Events and Errors on SQL Server
    Ed Price: "Fantastic solution! A great resource that's amazingly well written with formatting, clear parameters, images, References, and a See Also section! And it even comes in Portuguese! Great article!"
    AM: "Thank you for sharing this with us. A good source to learn about our SQL Server instances. "
     System Center Technical Guru - February 2015  
    MarkusEliasson
    Troubleshoot ID 32008: DPM cannot
    protect this SharePoint farm...
    Ed Price: "An important topic that's very clear with great formatting and a good use of an image!"
    t.c.rich
    Managing Priorities of Client Polices and A/V Policies in SCCM
    Ed Price: "I love the descriptions, breakdown of sections, and code formatting! Great article!" 
    Mr X
    How to copy SMSTS.log when a Task Sequence fails in SCCM
    Ed Price: "A very helpful table and a good contribution to the community! Mr X again thinks of important content gaps to fill!"
     Transact-SQL Technical Guru - February 2015  
    Saeid Hasani
    T-SQL: How the Order of Elements in the ORDER BY Clause Implemented in the Output Result
    Durval Ramos: "Very well structured and with examples that clarify how a T-SQL statement can change the data output order."
    Richard Mueller: "Good use of Wiki guidelines and great examples."
    Ronen Ariely
    Free E-Books about SQL and Transact-SQL languages
    Richard Mueller: "An excellent collection and a great idea."
    Durval Ramos: "A good initiative. Very useful !!!"
    Ricardo Lacerda
    Declare Cursor (Transact-SQL) versus Window with Over - Running Totals
    - Accumulated Earnings
    Durval Ramos: "The "Window function" sample was well presented, but it was unclear how the chart was generated."
    Richard Mueller: "A new idea that can be very useful. Grammar needs work"
     Visual Basic Technical Guru - February 2015  
    Emiliano Musso
    Genetic algorithm to solve 2D Mazes in Visual Basic
    MR: "Great article! Love to see an application for AI in a simple game"
    Durval Ramos: "This article is well documented with images and your code clarifying important details. It also has References, a very useful video and your project available for download in "MSDN Code" !"
    Richard Mueller: "Incredible concept and code. Grammar needs work."
    Paul Ishak
    MultiHeadedTrackBar Control
    Durval Ramos: "Very interesting article, with methods and properties well documented. Your project was available in "MSDN Code" which facilitates the understanding of solution."
    Richard Mueller: "Amazing work. Extensive code but with lots of comments. Needs a TOC"
    tommytwotrain
    Using Trigonometry to draw graphic curves in VB.NET part 2.
    MR: "Great continuation. Love the usage of the code for circle text"
    Durval Ramos: "The article is interesting, but It's need to work better commenting about assemblies referenced on project and also structure your content into sections."
    Richard Mueller: "Good tutorial and example code demonstrating basic concepts. Avoid first person."
     Visual C# Technical Guru - February 2015  
    Magnus (MM8)
    C#: Enumerating collections that change
    Jaliya Udagedara: "Great article. Has a thorough and to the point explanation of problem and the solution with code samples. Loved it!"
    Carmelo La Monica: "Very useful and exhaustive about errors at runtime in these circumstances. Congratulations"
    Andy ONeill
    c#: Practical Poly
    Carmelo La Monica: "Fantastic artcle. Very detailed and exhaustive, congratulations ."
    Jaliya Udagedara: "Definitely worth reading this. Explains somewhat advance topic along with a fundamental concept of programming. "
     Wiki and Portals Technical Guru - February 2015  
    Durval Ramos
    Wiki: Microsoft Short URLs Personalized by SXP
    PG: "Nice idea, lots of potential to grow, really needs some more community attention."
    Richard Mueller: "An excellent idea. Good use of Wiki guidelines."
     Windows Phone and Windows Store Apps Technical Guru - February 2015  
    Carmelo La Monica
    Windows Phone 8: control Nokia Maps (Part 3)
    JH: "Part 3 of the series how to work with the Nokia maps control. As the previous articles this one contains a lot of code snippets and some pictures. Good work!"
    Ed Price: "A great topic, a fantastic breakdown of sections with clear descriptions, and a nice mix of code formatting and helpful images! Another stellar article from Carmelo! Great job including the link back at the end to the portal
    article!"
     Windows PowerShell Technical Guru - February 2015  
    Richard Mueller
    Document Your Active Directory Organization
    Alan Carlos: "Wow! Great article, congratulations!!! Very detailed!"
    Chen V: "Excellent Article - I liked return to top as well."
    Ed Price: "Wow! It's like a professional whitepaper! It's a valuable topic that's done with intricate detail! I love the images, diagrams, code blocks, and it ends very well with more resources and Wiki articles! The article just keeps
    digging deeper and deeper! Awesome job on this!"
    DexterPOSH
    PowerShell + REST API : Invoke-RestMethod Gotcha
    Chen V: "Good Article. TOC might have made this more rich! "
    Ed Price: "This is a good topic with some great content. It could benefit from sections and a TOC, as well as a References and See Also sections at the end. The inline links are helpful. Could "
    DexterPOSH
    PowerShell Trick : Search & highlight text in MS Word
    Ed Price: "This is a great solution, with some helpful Q&A in the comments!"
     Windows Presentation Foundation (WPF) Technical Guru - February 2015  
    Andy ONeill
    Lookless Controls
    KJ: "WPF can definitely be confusing when devs first encounter it. Like the way you break it down."
    Ed Price: "Wow! Fantastic explanations that are very clear and deep! The images and code bring it to life!"
    Andy ONeill
    Only One Parent
    KJ: "Same iwith this one, good 101 intro"
    Ed Price: "Another great tip! I love the detail here as well! Those snippets help a lot!"
    Andy ONeill
    Bind to Current Item of Collection
    KJ: "Feel like this topic has a lot of coverage out there, but it can't hurt to hammer on databinding yet one more time :) "
    Ed Price: "Fantastic topic with great execution! Although these could benefit from References and See Also wiki sections at the end, the Inline links help a lot!"
     Windows Server Technical Guru - February 2015  
    Mr X
    Ping for Beginners
    Mark Parris: "A good introduction with additional content."
    JM: "Great article idea and an excellent article that will be useful to many, thanks for your contribution."
    Philippe Levesque: "Good article that show a usefull utility for basic troubleshooting"
    Richard Mueller
    Active Directory: Get-ADFineGrainedPasswordPolicy Default and Extended Properties
    Mark Parris: "An Interesting insight on FGPP and their extended properties."
    JM: "This is a good piece of detailed information about this PowerShell cmdlet, thanks for sharing."
    Philippe Levesque: "Great article ! Illustrating some cmdlet's output when a user got assigned policy versus a user with the default domain policy could be a good idea."
    Richard Mueller
    Active Directory: Get-ADServiceAccount Default and Extended Properties
    Mark Parris: "A useful nugget of information."
    JM: "More very useful information about an AD cmdlet, thanks!"
    Philippe Levesque: "Good article !"
    As mentioned above, runners up and comments were removed from this post, to fit into the forum's 60,000 character limit.
    You will find the complete post, comments and feedback on the
    main announcement post.
    Please join the discussion, add a comment, or suggest future categories.
    If you have not yet contributed an article for this month, and you think you can write a more useful, clever, or better produced wiki article than the winners above,
    THERE'S STILL TIME! :D
    Best regards,
    Pete Laker
    More about the TechNet Guru Awards:
    TechNet Guru Competitions
    #PEJL
    Got any nice code? If you invest time in coding an elegant, novel or impressive answer on MSDN forums, why not copy it over to
    TechNet Wiki, for future generations to benefit from! You'll never get archived again, and
    you could win weekly awards!
    Have you got what it takes to become this month's
    TechNet Technical Guru? Join a long list of well known community big hitters, show your knowledge and prowess in your favoured technologies!

    Congrats to Saeid, Ronen, and Ricardo! Big thank you to all our contributors!
     Transact-SQL Technical Guru - February 2015  
    Saeid Hasani
    T-SQL: How the Order of Elements in the ORDER BY Clause Implemented in the Output Result
    Durval Ramos: "Very well structured and with examples that clarify how a T-SQL statement can change the data output order."
    Richard Mueller: "Good use of Wiki guidelines and great examples."
    Ronen Ariely
    Free E-Books about SQL and Transact-SQL languages
    Richard Mueller: "An excellent collection and a great idea."
    Durval Ramos: "A good initiative. Very useful !!!"
    Ricardo Lacerda
    Declare Cursor (Transact-SQL) versus Window with Over - Running Totals
    - Accumulated Earnings
    Durval Ramos: "The "Window function" sample was well presented, but it was unclear how the chart was generated."
    Richard Mueller: "A new idea that can be very useful. Grammar needs work"
    Also worth a mention were the other entries this month:
    [T-SQL] Retrieve Table List with Number of Rows by
    Emiliano Musso
    Richard Mueller: "Short but sweet solution to basic question."
    Durval Ramos: "A simple T-SQL script, but useful."
    [T-SQL] Search for Missing Values within a Numerical Sequence by
    Emiliano Musso
    Richard Mueller: "Clever solution with good code examples."
    Durval Ramos: "You need add more details about development of the idea and create a "Conclusion" section to easy understanding."
    [T-SQL] Converting Multiple Rows into HTML Format single ROW by
    Maheen Khizar (Bint-e-Adam)
    Durval Ramos: "In some situations, It's need to consume and format HTML tags for a UI, but It's important to remember that Best Practices recommend this formatting process preferably in Presentation Layer"
    Richard Mueller: "A great new idea. Some features need more explanation. Avoid first person."
    Ed Price, Azure & Power BI Customer Program Manager (Blog,
    Small Basic,
    Wiki Ninjas,
    Wiki)
    Answer an interesting question?
    Create a wiki article about it!

  • SQL*Plus and NLS_DATE_FORMAT

    So, I haven't visited this topic in a long time, so I'm trying to refresh my memory on how everything works ...
    We set our NLS_DATE_FORMAT at the system level ... what ... 4 years ago, to 'MM/DD/RR'. Despite the fact that I set this to something more commonly used than the default of 'DD-Mon-RR', we've adopted the standard of always using TO_DATE() with an explicit format, just in case it's ever changed.
    There were some applications that missed the standard, and now that a driver (for ColdFusion) has been updated, these SQL statements are now failing.
    When I started looking into it, I realized that the system-level default of 'MM/DD/RR' should work fine. But, after experimenting in SQL*Plus and TOAD, I am thinking that either:
    1) the system-level format is not being used
    2) and/or there are login scripts which are setting these to something else
    Coincidentally, both SQL*Plus and TOAD return the exact same query results:
    SQL> select *
         from NLS_INSTANCE_PARAMETERS
         where parameter = 'NLS_DATE_FORMAT';
    PARAMETER                      VALUE
    NLS_DATE_FORMAT                MM/DD/RR
    SQL> select *
         from NLS_SESSION_PARAMETERS
         where parameter = 'NLS_DATE_FORMAT';
    PARAMETER                      VALUE
    NLS_DATE_FORMAT                DD-MON-RR
    So I looked into the glogin.sql script (which both tools share), and there's nothing mentioned about the NLS_DATE_FORMAT:
    -- Copyright (c) 1988, 2003, Oracle Corporation. 
    -- All Rights Reserved.
    -- NAME
    --   glogin.sql
    -- DESCRIPTION
    --   SQL*Plus global login "site profile" file
    --   Add any SQL*Plus commands here that are to
    --   be executed when a user starts SQL*Plus, or
    --   uses the SQL*Plus CONNECT command
    -- USAGE
    --   This script is automatically run
    -- Used by Trusted Oracle
    COLUMN ROWLABEL FORMAT A15
    -- Used for the SHOW ERRORS command
    COLUMN LINE/COL FORMAT A8
    COLUMN ERROR    FORMAT A65  WORD_WRAPPED
    -- Used for the SHOW SGA command
    COLUMN name_col_plus_show_sga FORMAT a24
    COLUMN units_col_plus_show_sga FORMAT a15
    -- Defaults for SHOW PARAMETERS
    COLUMN name_col_plus_show_param FORMAT a36 HEADING NAME
    COLUMN value_col_plus_show_param FORMAT a30 HEADING VALUE
    -- Defaults for SHOW RECYCLEBIN
    COLUMN origname_plus_show_recyc   FORMAT a16 HEADING 'ORIGINAL NAME'
    COLUMN objectname_plus_show_recyc FORMAT a30 HEADING 'RECYCLEBIN NAME'
    COLUMN objtype_plus_show_recyc    FORMAT a12 HEADING 'OBJECT TYPE'
    COLUMN droptime_plus_show_recyc   FORMAT a19 HEADING 'DROP TIME'
    -- Defaults for SET AUTOTRACE EXPLAIN report
    COLUMN id_plus_exp FORMAT 990 HEADING i
    COLUMN parent_id_plus_exp FORMAT 990 HEADING p
    COLUMN plan_plus_exp FORMAT a60
    COLUMN object_node_plus_exp FORMAT a8
    COLUMN other_tag_plus_exp FORMAT a29
    COLUMN other_plus_exp FORMAT a44
    -- Used to alter the TOAD environment so that users do not have to
    -- use the SET DEFINE OFF command prior to compiling code
    -- Charles Forbes 10.17.2005
    SET scan off
    If I expressly go into either tool and execute the following, setting the format to that already declared at the system level:
    alter session set nls_date_format = 'MM/DD/RR'
    Then these SQL statements start running just fine again.
    There's something that I'm missing in my basic understanding of how this works. I assumed that the driver update for ColdFusion perhaps enabled a different "glogin.sql"-type script equivalent for that tool ... until ... I started checking into how the NLS_DATE_FORMAT is supposed to work ... but isn't. Could someone help me clarify where the hole is in my understanding?
    Thanks,
    Chuck

    chuckers wrote:
    What's the difference, then, between NLS_SESSION_PARAMETERS and NLS_INSTANCE_PARAMETERS in my initial post? The glogin.sql script isn't altering the NLS_DATE_FORMAT for the desktop version of SQL*Plus, so I'm perplexed that the SESSION format differs from the INSTANCE format.NLS_SESSION_PARAMETERS are the NLS parameters that are in force for your particular session (i.e. the particular connection you have). Most client applications cause things like NLS_DATE_FORMAT to be set, overriding the NLS_INSTANCE_PARAMETERS. Instance-level NLS settings are most commonly used only for purely back-end processing (i.e. background jobs scheduled via DBMS_JOB or DBMS_SCHEDULER, etc.) 9 times out of 10, the client application is going to override the instance-level paramters.
    I even opened SQL Developer, and got
    select * from nls_session_parameters where parameter = 'NLS_DATE_FORMAT';
    PARAMETER                      VALUE                                   
    NLS_DATE_FORMAT                DD-MON-RR                               
    1 rows selected
    select * from nls_instance_parameters where parameter = 'NLS_DATE_FORMAT'
    PARAMETER                      VALUE                                   
    NLS_DATE_FORMAT                MM/DD/RR                                
    1 rows selected
    That's not unexpected. Java applications are going to use the Java regional properties at least to specify a date format. SQL Developer has a config option for the date format, so it may well be specifying a different format.
    All three (TOAD, SQL*Plus, SQL Developer) are so suspiciously consistent that I'm questioning some of the fundamentals of the NLS setup.
    I went ahead and looked via SQL*Plus from the server side, and things are looking more consistently in line with my expectations:
    [oracle@dvsrvr13 ~]$ sqlplus forbesc@d13
    SQL*Plus: Release 10.1.0.4.0 - Production on Thu Mar 19 12:44:57 2009
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Enter password:
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    SQL> select * from nls_session_parameters where parameter = 'NLS_DATE_FORMAT';
    PARAMETER                      VALUE
    NLS_DATE_FORMAT                MM/DD/RR
    SQL> select * from nls_instance_parameters where parameter = 'NLS_DATE_FORMAT';
    PARAMETER                      VALUE
    NLS_DATE_FORMAT                MM/DD/RR
    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    [oracle@dvsrvr13 ~]$ locate glogin.sql
    /u01/app/oracle/product/10.1.0/db_1/sqlplus/admin/glogin.sql
    [oracle@dvsrvr13 ~]$ more /u01/app/oracle/product/10.1.0/db_1/sqlplus/admin/glogin.sql
    -- Copyright (c) 1988, 2003, Oracle Corporation.  All Rights Reserved.
    -- NAME
    --   glogin.sql
    -- DESCRIPTION
    --   SQL*Plus global login "site profile" file
    --   Add any SQL*Plus commands here that are to be executed when a
    --   user starts SQL*Plus, or uses the SQL*Plus CONNECT command
    -- USAGE
    --   This script is automatically run
    -- Used by Trusted Oracle
    COLUMN ROWLABEL FORMAT A15
    -- Used for the SHOW ERRORS command
    COLUMN LINE/COL FORMAT A8
    COLUMN ERROR    FORMAT A65  WORD_WRAPPED
    -- Used for the SHOW SGA command
    COLUMN name_col_plus_show_sga FORMAT a24
    COLUMN units_col_plus_show_sga FORMAT a15
    -- Defaults for SHOW PARAMETERS
    COLUMN name_col_plus_show_param FORMAT a36 HEADING NAME
    COLUMN value_col_plus_show_param FORMAT a30 HEADING VALUE
    -- Defaults for SHOW RECYCLEBIN
    COLUMN origname_plus_show_recyc   FORMAT a16 HEADING 'ORIGINAL NAME'
    COLUMN objectname_plus_show_recyc FORMAT a30 HEADING 'RECYCLEBIN NAME'
    COLUMN objtype_plus_show_recyc    FORMAT a12 HEADING 'OBJECT TYPE'
    COLUMN droptime_plus_show_recyc   FORMAT a19 HEADING 'DROP TIME'
    -- Defaults for SET AUTOTRACE EXPLAIN report
    COLUMN id_plus_exp FORMAT 990 HEADING i
    COLUMN parent_id_plus_exp FORMAT 990 HEADING p
    COLUMN plan_plus_exp FORMAT a60
    COLUMN object_node_plus_exp FORMAT a8
    COLUMN other_tag_plus_exp FORMAT a29
    COLUMN other_plus_exp FORMAT a44
    [oracle@dvsrvr13 ~]$
    So in all, I'm just perplexed by the differences.
    --=cf
    I'm not surprised that SQL*Plus from the Unix database server is going to have session-level settings that match the instance-level settings, because there is probably nothing set in the Unix environment that would override the instance-level settings. There is probably no NLS_LANG or NLS_DATE_FORMAT set as an environment variable, and probably no central place to look for regional settings. Most applications, particularly Windows and Java apps, are going to have multiple places to look for that sort of information.
    Justin

  • Need Guide to create a table in SQL Server and Process data for JDBC

    Dear All,
    Scenario:JDBC to JDBC
    I need to practice the JDBC to JDBC scenario, and for that I need to create a table in SQL Server for sender, receiver and update. I have installed SQL Server and have no idea about table creation or the connection string for PI.
    I would like you to explain each and every step for the table creation, driver and connection string.
    Thanks in Advance.

    Try searching in the forum and then Google. This forum is not for teaching the basics.
    VJ

  • Performance between SQL Statement and Dynamic SQL

    Select emp_id
    into id_val
    from emp
    where emp_id = 100;

    EXECUTE IMMEDIATE
    'Select ' || t_emp_id ||
    ' from emp' ||
    ' where emp_id = 100'
    INTO id_val;

    Will there be more impact in performance while using Dynamic SQL?

    CP wrote:
    Will there be more impact in performance while using Dynamic SQL?
    All SQLs are parsed and executed as SQL cursors.
    The 2 SQLs (dynamic and static) result in the exact same SQL cursor, so both methods will use an identical cursor. There are therefore no performance differences in terms of how fast that SQL cursor will be.
    If an identical SQL cursor is found in the shared pool, it is simply reused (a soft parse). If not, the SQL engine needs to compile the supplied SQL source code into a SQL cursor (a hard parse).
    Hard parsing burns a lot of CPU cycles. Soft parsing burns fewer CPU cycles and is therefore better. However, no parsing at all is best.
    To explain: if the code creates a cursor (e.g. INSERT INTO tab VALUES( :1, :2, :3 ) for inserting data), it can do it as follows:
    while More Data Found loop
      parse INSERT cursor
      bind variables to INSERT cursor
      execute INSERT cursor
      close INSERT cursor
    end loop
    If that INSERT cursor does not yet exist, it will be hard parsed and a cursor created. Each subsequent loop iteration will result in a soft parse.
    However, the code will be far more optimal as follows:
    parse INSERT cursor
    while More Data Found loop
      bind variables to INSERT cursor
      execute INSERT cursor
    end loop
    close INSERT cursor
    With this approach the cursor is parsed (hard or soft) once only. The cursor handle is then used again and again, and when the application is done inserting data, the cursor handle is released.
    With dynamic SQL in PL/SQL, you cannot really follow the optimal approach - unless you use DBMS_SQL (a complex cursor interface). With static SQL, PL/SQL's optimiser can kick in and optimise access to the cursors your code creates, minimising parsing altogether.
    This is however not the only consideration when using dynamic SQL. Dynamic SQL makes coding a lot more complex. The SQL code can now only be checked at execution time and not at development time. There is the issue of creating shareable SQL cursors using bind variables. There is the risk of SQL injection. Etc.
    So dynamic SQL is seldom a good idea. And IMO, the vast majority of people who post problems here relating to dynamic SQL are using it unnecessarily, for no justified and logical reason - creating unstable, insecure and non-performing code.
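    To make the cursor-sharing point concrete, here is a minimal PL/SQL sketch reusing the emp table from the question (with a literal column name, and a bind variable in the dynamic form so the cursor stays shareable):
    DECLARE
      id_val emp.emp_id%TYPE;
    BEGIN
      -- Static SQL: checked at compile time; PL/SQL caches the cursor.
      SELECT emp_id INTO id_val FROM emp WHERE emp_id = 100;
      -- Dynamic SQL: checked only at run time; the bind variable keeps
      -- the cursor shareable across executions.
      EXECUTE IMMEDIATE 'SELECT emp_id FROM emp WHERE emp_id = :1'
        INTO id_val USING 100;
    END;
    /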

  • SQL Report not showing data - available in SQL Workshop and SQL Developer

    I am having an issue with developing a SQL Report in APEX 3.2.1. I run the code in both SQL Developer and SQL Workshop, and I get data pulled back (both against my development environment). When I run the same code in a SQL Report region, it returns "no data available". Does anyone have any idea what would be causing this? Other regions on the page accessing different tables in the same schema return data without issue. Any help would be appreciated.
    Thanks
    Freddie

    Could you explain the last comment a bit more? Here is a bit more info, just in case I touch on it. The DB schema is BPAMGR, and the workspace is BPAMGR. We use the same schema for all of our reporting. All of our tables are in the same schema; we don't use any tables outside of it. Our APEX workspace has been associated with only this schema. The tables can be queried by SQL Workshop in the same APEX instance that the report application is under.
    Freddie

  • Restored our SSRS 2008 R2 from one server to another; Dates are in UK format and not US

    We have restored our SSRS 2008 R2 from one server to another. The original server was in the US locale/culture. The new server was in the UK locale/culture when the restore happened; however, it should have been in the US locale/culture. We have made this change, and new reports work OK.
    However, existing reports (i.e. those saved by a user) are still trying to use the US dates in a UK format and, as a result, are throwing a "date can't be below 1753" error.
    Has anyone seen this issue before or have any idea what we need to do to fix it?
    Thanks
    Kimberlad

    Hi Kimberlad,
    Have you checked for any changes in the collation?
    Please verify the Reporting Server databases and the server collation - is there any mismatch with your source server?
    Also, please post the complete error message.
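    A quick sketch for comparing collations on both servers (the default Report Server database names are assumed):
    SELECT SERVERPROPERTY('Collation') AS server_collation;
    SELECT name, collation_name
    FROM sys.databases
    WHERE name IN ('ReportServer', 'ReportServerTempDB');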
    Nag Pal MCTS/MCITP (SQL Server 2005/2008) :: Please Mark Answer/vote if it is helpful ::

  • Deliver a report in XML format and save it to local file system on the server

    We have OBIEE 10.1.3.4 on Red Hat Linux. We want to generate a report in XML format and save it to the server's file system. I did not realize this is such a difficult task. Basically:
    1) How do I create an XML report? It is not listed in the output items of a layout template.
    2) How do I deliver the XML report to the local file system? Looking into the Delivery section of the Admin page, FTP seems the best choice, but does that mean one needs to install and run an FTP server on the BI server box?
    Thanks

    Hi,
    Since I still have problems on this subject, I would like to share how it is progressing.
    Currently I have a timeout problem in the step "Truncate XML Schema" with the URL that I mentioned above.
    The exact error is the following: 7000 : null : com.sunopsis.sql.l: Oracle Data Integrator TimeOut : connection with URL [...]
    The connection test is still OK.
    I tried to increase the value in the user's preferences, but there's no change.

  • How to Perform Forced Manual Failover of Availability Group (SQL Server) and WSFC (Windows Server Failover Cluster)

    I have a scenario with three nodes running Windows Server 2012 Standard, each running an instance of SQL Server 2012 Enterprise, participating in a single Windows Server Failover Cluster (WSFC) that spans two data centers.
    If the nodes in the primary data center are unavailable due to a data center outage, how can I access the node in the WSFC in the secondary disaster recovery data center automatically with a script?
    I want to write a script that checks the primary data center by pinging some IP every 5 or 10 minutes.
    If that IP fails to respond, the script should perform a forced manual failover of the Availability Group (SQL Server) and the WSFC (Windows Server Failover Cluster).
    Can you please guide me in writing a script for automatic failover in case of a primary data center outage?

    Please post your question on failover clusters in the cluster forum. They will explain how this works and point you at scripts.
    You should also look in the Gallery for cluster management scripts.
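    For reference, the Availability Group half of the forced failover is a single T-SQL statement, run on the target secondary replica after the WSFC quorum has been forced with the cluster tools (a sketch; the AG name is hypothetical):
    -- Accepts possible data loss; run on the DR secondary replica.
    ALTER AVAILABILITY GROUP [MyAG] FORCE_FAILOVER_ALLOW_DATA_LOSS;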
    ¯\_(ツ)_/¯

  • What is SQL Trace and How to Use it .

    Dear Experts .
    1.) Could you please tell me the purpose of SQL Trace and how to use it?
    2.) What is the purpose of T-codes SE30 and ST22?
    Please, it is urgent ...
    Regards :  Rajneesh

    Hi
    SQL Trace, transaction ST05: The trace list has many lines that are not related to the SELECT statement in the ABAP program. This is because the execution of any ABAP program requires additional administrative SQL calls. To restrict the list output, use the filter when displaying the trace list.
    The trace list contains different SQL statements simultaneously related to the one SELECT statement in the ABAP program. This is because the R/3 Database Interface - a sophisticated component of the R/3 Application Server - maps every Open SQL statement to one or a series of physical database calls and brings it to execution. This mapping, crucial to R/3's performance, depends on the particular call and database system. For example, the SELECT-ENDSELECT loop on a particular database table of the ABAP program would be mapped to a PREPARE-OPEN-FETCH sequence of physical calls in an Oracle environment.
    The WHERE clause in the trace list's SQL statement is different from the WHERE clause in the ABAP statement. This is because in an R/3 system, a client is a self-contained unit with separate master records and its own set of table data (in commercial, organizational, and technical terms). With ABAP, every Open SQL statement automatically executes within the correct client environment. For this reason, a condition with the actual client code is added to every WHERE clause if a client field is a component of the searched table.
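    For example - a sketch based on the ST05 trace output quoted in the first question on this page - an Open SQL "select * from sbook" executed in client 000 reaches an Oracle database roughly as:
    SELECT T_00.*
    FROM "SBOOK" T_00
    WHERE T_00."MANDT" = '000'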
    To see a statement's execution plan, just position the cursor on the PREPARE statement and choose Explain SQL. A detailed explanation of the execution plan depends on the database system in use.
    Runtime analysis, transaction SE30: This transaction gives a complete analysis of an ABAP program with respect to the database and the non-database processing.
    STEPS
    Run time analysis transaction SE30
    In Transaction SE30, fill in the transaction name or the program name which needs to be analyzed for performance tuning.
    For our case, let this be “ZABAP_PERF_TUNING”
    After giving the required inputs to the program, execute it. After the final output list has been displayed, PRESS the “BACK” button.
    On the original SE30 screen, now click on “ANALYZE” button.
    The percentage shown for each of the areas ABAP/Database/System is the percentage of total time used by that area while running the program. The lower the database load, the faster the program runs.
    SQL Trace – ST05
    Starting the Trace:
    To analyze a trace file, do the following:
    Choose the menu path Test → Performance Trace in the ABAP Workbench, or go to transaction ST05. The initial screen of the test tool appears. In the lower part of the screen, the status of the Performance Trace is displayed. This provides you with information as to whether any of the Performance Traces are switched on and the users for which they are enabled. It also tells you which user has switched the trace on.
    Using the selection buttons provided, set which trace functions you wish to have switched on (SQL trace, enqueue trace, RFC trace, table buffer trace).
    If you want to switch on the trace under your user name, choose Trace on. If you want to pass on values for one or several filter criteria, choose Trace with Filter.
    Typical filter criteria are: the name of the user, transaction name, process name, and program name.
    Now run the program to be analyzed.
    Stopping the Trace:
    To deactivate the trace:
    Choose Test Performance Trace in the ABAP Workbench. The initial screen of the test tool appears. It contains a status line displaying the traces that are active, the users for whom they are active, and the user who activated them.
    Select the trace functions that you want to switch off.
    Choose Deactivate Trace. If you started the trace yourself, you can now switch it off immediately. If the performance trace was started by a different user, a confirmation prompt appears before deactivation.
    Analyzing sample trace data: PREPARE: Prepares the OPEN statement for use and determines the access method.
    OPEN: Opens the cursor and specifies the selection result by filling the selection fields with concrete values.
    FETCH: Moves the cursor through the dataset created by the OPEN operation. The array size displayed beside the fetch data means that the system can transfer a maximum package size of 392 records at one time into the buffered area.

  • Pull data from SQL Table and display it in mail

    I have a requirement to pull data from a SQL table and send it in an email. Currently I am sending hard-coded info in the email, but is it possible to pull some data from a SQL table and then format it and send it across in the same email?
    Can you guide me with the steps for this?
    Neil

    There are several ways to do this. The first is to populate a file in a data flow and then send that as an attachment in the Send Mail task.
    As far as including the results in the email body, this becomes a bit trickier. To use a variable you would need an SSIS variable of type Object; this is similar to a collection in .NET. The problem, once the object is populated, is that it isn't like a readable result set, but again more like an array or a collection. There is no native method to take the object variable and call .ToString() or cast its results as text. You would need to iterate through each row and append it to another variable of type String; this could be done with a Script Task or a ForEach container.
    Also, you mentioned formatting the results. What type of formatting were you looking for? A limitation of the SMTP Send Mail task is that the message body doesn't support HTML, so if you were looking at creating a table within the mail body you would have to use a Script Task or a custom component.
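    As an alternative outside SSIS - a hedged sketch, assuming Database Mail is configured, with placeholder profile, recipient and table names - msdb.dbo.sp_send_dbmail can embed query results directly in the message body:
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'DefaultProfile',                       -- placeholder profile
        @recipients   = N'neil@example.com',                     -- placeholder address
        @subject      = N'Table extract',
        @query        = N'SELECT TOP (10) * FROM dbo.MyTable',   -- placeholder query
        @attach_query_result_as_file = 0;                        -- 0 = results in the body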
    David Dye My Blog

  • Count number of rows from oracle and sql database and generate a report

    Hi All,
    Can someone help me in writing a java program for the following scenario?
    1. Read the number of rows available in an Oracle table and print the total row count in a Report.txt text file.
    2. Read the number of rows inserted into a SQL Server database (after a specific process - just an informational step) and print the total row count in the same text file, Report.txt.
    3. Read the error log file (which is generated after a specific process, say with a success or failure entry for each iteration) and print the number of successes and the number of failures in the same text file, Report.txt.
    I need the final Report.txt file in the following format:
    1. Oracle table <table name> has 500000 rows.
    2. After completion of the specific process 300000 rows were added to SQL table <table name>
    Error Log Report:
    300000 successful entries and 200000 failed entries were found in the Error Log file.

    Thanks for your immediate reply.
    I'm just a beginner in Java, so if I make any mistake please correct and excuse me. :)
    This is the code i have for connecting to two different database.
    package connectDatabase;

    import java.sql.*;

    public class ConnectTo {

        // Connects to the Oracle database via the thin JDBC driver.
        public void OracleDB() {
            try {
                DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
                String connString = "jdbc:oracle:thin:@SYS_IP:1521:oracl";
                Connection dbconn = DriverManager.getConnection(connString, "uname", "pwd");
            } catch (SQLException sqlex) {
                sqlex.printStackTrace();
            } catch (Exception excp) {
                excp.printStackTrace();
            }
        }

        // Connects to SQL Server via the JDBC-ODBC bridge DSN "connectDB".
        public void SqlDB() {
            try {
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                String url = "jdbc:odbc:connectDB";
                Connection conn = DriverManager.getConnection(url, "uname", "pwd");
            } catch (Exception e) {
                System.err.println("An Exception occurred! " + e.getMessage());
            }
        }
    }
    I'm just coding the second half, which connects to the Oracle database, calculates the row count and displays it in the command prompt, then connects to the SQL database, counts the successful inserts in the table (row count) and displays that on the command prompt.
    Can you simplify the above code so that I can call the OracleDB() and SqlDB() methods separately from another class file using an object of this class?
    I'm OK if the report can be seen in a command prompt. Finally, I need to calculate the success and failure counts from the log file; I will let you know once I'm done with coding.
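    For steps 1 and 2, the count itself is plain SQL that works on both databases (a sketch; the table names are placeholders), executed through each connection in turn:
    -- via the Oracle connection
    SELECT COUNT(*) FROM my_oracle_table;
    -- via the SQL Server connection
    SELECT COUNT(*) FROM my_sqlserver_table;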

  • Can HTML-based reports be built in BLS via an SQL Query and XSLT?

    Hello xMII experts,
    I have already built a report in xMII which uses XSLT to provide group/sum totals in a web browser. However in a new project, the report must run at certain times and possibly when certain signals become true in the process.
    It appears that BLS is a good choice to achieve this and my proposed Transaction was:
    1. SQL Query Action(uses the same QueryTemplate as in xMII)
    2. XSLTransform Action on the resulting XML resultset (The .xsl file contains HTML which is the original used in xMII to produce the report there)
    3. HTML Loader action with the resulting output of the transformation
    I have now got some output in the resulting HTML file - however it omits all XSLT code - and I am left with an empty HTML shell but for a few images.
    This indicates that perhaps no SQL data was ever returned.
    I have therefore two questions:
    1. How can I check whether the SQL query returned data?
    2. Is it possible to deliver data to an HTML file directly after an XSL transformation?
    Looking forward to your responses
    Best Regards
    Robert Sales

    Thank you for the replies.
    I am a little closer to the result I need - however, I think I need to explain what I had and what I need a little better.
    Before BLS
    1 xMII report page (.irpt extension) with two iCalendar applets (start/end date) and a set of buttons (1 for each report)
    Upon clicking on a button the two dates are passed into an .irpt file, and via a servlet an SQL QueryTemplate and a XSL DisplayTemplate are used to build the report.
    The HTML is embedded in the XSL file - thereby generating the report direct in the web browser.
    - This all works fine
    With BLS
    A transaction which uses a modified SQL QueryTemplate (no date parameters) passing the results to an XML file. This works.
    Now when I click on the button in my xMII screen, the .irpt file is called with no date parameters, and the xAcute QueryTemplate is called with the XSL DisplayTemplate. The .irpt file has <html> and <body> tags, with the XSL file in the servlet call providing the tables and the data extraction from the XML.
    I have no additional HTML file so I placed the iframe tag inside the XSL file - but it refers to the .irpt file - this doesn't sound right!
    I do get a little output in the web browser but it still omits all XSLT code.
    One more point - the Transaction can be scheduled and run as required, but I need the entire report to be created and stored for viewing at a later date. Will a servlet tag running inside an .irpt file achieve this?
    Sorry for the chaotic writing here - but I must leave the office.
    Regards
    Robert Sales

  • SQL Tuning and OPTIMIZER - Execution Time with  " AND col .."

    Hi all,
    I have a question about SQL tuning and the OPTIMIZER.
    There are three samples with EXPLAIN PLAN and execution time.
    This "tw_pkg.getMaxAktion" is a PLSQL Package.
    1.) Execution Time : 0.25 Second
    2.) Execution Time : 0.59 Second
    3.) Execution Time : 1.11 Second
    The only difference is some additional "AND col <> .."
    Why does the execution time grow so strongly?
    Many Thanks,
    Thomas
    ----[First example]---
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    Connected as dbadmin2
    SQL>
    SQL> EXPLAIN PLAN FOR
      2  SELECT * FROM ( SELECT studie_id, tw_pkg.getMaxAktion(studie_id) AS max_aktion_id
      3                    FROM studie
      4                 ) max_aktion
      5  WHERE max_aktion.max_aktion_id < 900 ;
    Explained
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3201460684
    | Id  | Operation            | Name        | Rows  | Bytes | Cost (%CPU)| Time
    |   0 | SELECT STATEMENT     |             |   220 |   880 |     5  (40)| 00:00:
    |*  1 |  INDEX FAST FULL SCAN| SYS_C005393 |   220 |   880 |     5  (40)| 00:00:
    Predicate Information (identified by operation id):
       1 - filter("TW_PKG"."GETMAXAKTION"("STUDIE_ID")<900)
    13 rows selected
    SQL>
    Execution time (PL/SQL Developer says): 0.25 seconds
    ----[/First]---
    ----[Second example]---
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    Connected as dbadmin2
    SQL>
    SQL> EXPLAIN PLAN FOR
      2  SELECT * FROM ( SELECT studie_id, tw_pkg.getMaxAktion(studie_id) AS max_aktion_id
      3                    FROM studie
      4                 ) max_aktion
      5  WHERE max_aktion.max_aktion_id < 900
      6    AND max_aktion.max_aktion_id <> 692;
    Explained
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3201460684
    | Id  | Operation            | Name        | Rows  | Bytes | Cost (%CPU)| Time
    |   0 | SELECT STATEMENT     |             |    11 |    44 |     6  (50)| 00:00:
    |*  1 |  INDEX FAST FULL SCAN| SYS_C005393 |    11 |    44 |     6  (50)| 00:00:
    Predicate Information (identified by operation id):
       1 - filter("TW_PKG"."GETMAXAKTION"("STUDIE_ID")<900 AND
                  "TW_PKG"."GETMAXAKTION"("STUDIE_ID")<>692)
    14 rows selected
    SQL>
    Execution time (PL/SQL Developer says): 0.59 seconds
    ----[/Second]---
    ----[Third example]---
    SQL> EXPLAIN PLAN FOR
      2  SELECT * FROM ( SELECT studie_id, tw_pkg.getMaxAktion(studie_id) AS max_aktion_id
      3                    FROM studie
      4                 ) max_aktion
      5  WHERE max_aktion.max_aktion_id < 900
      6    AND max_aktion.max_aktion_id <> 692
      7    AND max_aktion.max_aktion_id <> 392;
    Explained
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3201460684
    | Id  | Operation            | Name        | Rows  | Bytes | Cost (%CPU)| Time
    |   0 | SELECT STATEMENT     |             |     1 |     4 |     6  (50)| 00:00:
    |*  1 |  INDEX FAST FULL SCAN| SYS_C005393 |     1 |     4 |     6  (50)| 00:00:
    Predicate Information (identified by operation id):
       1 - filter("TW_PKG"."GETMAXAKTION"("STUDIE_ID")<900 AND
                  "TW_PKG"."GETMAXAKTION"("STUDIE_ID")<>692 AND
                  "TW_PKG"."GETMAXAKTION"("STUDIE_ID")<>392)
    15 rows selected
    SQL>
    Execution time (PL/SQL Developer says): 1.11 seconds
    ----[/Third]---
    Edited by: thomas_w on Jul 9, 2010 11:35 AM
    Edited by: thomas_w on Jul 12, 2010 8:29 AM

    Hi,
    this is likely because SQL Developer fetches and displays only a limited number of rows from query results.
    This number is a parameter called 'SQL array fetch size'; you can find it in the SQL Developer preferences under Tools/Preferences/Database/Advanced, and its default value is 50 rows.
    The query scans the table from the beginning and continues scanning until the first 50 rows are selected.
    If the query conditions are more selective, then more table rows (or index entries) must be scanned to fetch the first 50 results, and the execution time grows.
    This effect is usually unnoticeable when a query uses simple and fast built-in comparison operators (like = and <>) or Oracle built-in functions, but your query uses a PL/SQL function that is much slower than the built-in functions/operators.
    Try changing this parameter to 1000 and most likely you will see that the execution times of all 3 queries become similar.
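    As a side check that is independent of the fetch size, you can force every row to be evaluated by wrapping the query in a COUNT (a sketch reusing the query from the question):
    SELECT COUNT(*)
    FROM ( SELECT studie_id, tw_pkg.getMaxAktion(studie_id) AS max_aktion_id
           FROM studie ) max_aktion
    WHERE max_aktion.max_aktion_id < 900;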
    Look at this simple test to figure out how it works:
    CREATE TABLE studie AS
    SELECT row_number() OVER (ORDER BY object_id) studie_id,  o.*
    FROM (
      SELECT * FROM all_objects
      CROSS JOIN
      (SELECT 1 FROM dual CONNECT BY LEVEL <= 100)
    ) o;
    CREATE INDEX studie_ix ON studie(object_name, studie_id);
    ANALYZE TABLE studie COMPUTE STATISTICS;
    CREATE OR REPLACE FUNCTION very_slow_function(action IN NUMBER)
    RETURN NUMBER
    IS
    BEGIN
      RETURN action;
    END;
    /

    The 'SQL array fetch size' parameter in SQL Developer has been set to 50 (the default). We will run 3 different queries on the test table.
    Query 1:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id < 900
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      1.22       1.29          0       1310          0          50
    total        3      1.22       1.29          0       1310          0          50
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
         50  INDEX FAST FULL SCAN STUDIE_IX (cr=1310 pr=0 pw=0 time=355838 us cost=5536 size=827075 card=165415)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
         50   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 2:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id < 900
          AND max_aktion.max_aktion_id > 800
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.01          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      8.40       8.62          0       9351          0          50
    total        3      8.40       8.64          0       9351          0          50
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
         50  INDEX FAST FULL SCAN STUDIE_IX (cr=9351 pr=0 pw=0 time=16988202 us cost=5552 size=41355 card=8271)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
         50   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 3:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id = 600
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     18.72      19.16          0      19315          0           1
    total        3     18.73      19.16          0      19315          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
          1  INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=0 us cost=5536 size=165415 card=33083)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 1 - 1.29 sec, 50 rows fetched, 1310 index entries scanned to find these 50 rows.
    Query 2 - 8.64 sec, 50 rows fetched, 9351 index entries scanned to find these 50 rows.
    Query 3 - 19.16 sec, only 1 row fetched, 19315 index entries scanned (full index).
    Now the 'SQL array fetch size' parameter in SQL Developer has been set to 1000.
    Query 1:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id < 900
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     18.35      18.46          0      19315          0         899
    total        3     18.35      18.46          0      19315          0         899
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
        899  INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=20571272 us cost=5536 size=827075 card=165415)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
        899   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 2:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id < 900
          AND max_aktion.max_aktion_id > 800
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     18.79      18.86          0      19315          0          99
    total        3     18.79      18.86          0      19315          0          99
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
         99  INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=32805696 us cost=5552 size=41355 card=8271)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
         99   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 3:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id = 600
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     18.69      18.84          0      19315          0           1
    total        3     18.69      18.84          0      19315          0           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
          1  INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=0 us cost=5536 size=165415 card=33083)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    And now:
    Query 1 - 18.46 sec, 899 rows fetched, 19315 index entries scanned.
    Query 2 - 18.86 sec, 99 rows fetched, 19315 index entries scanned.
    Query 3 - 18.84 sec, 1 row fetched, 19315 index entries scanned.
