View the generated SQL for fetching a repository item

Hi guys.
I am interested in viewing the SQL that the ATG framework generates to fetch a primary repository item. I would like to run this generated SQL directly against the database and spare myself the pain of joining multiple tables by hand; basically, I am looking to avoid writing complex SQL queries to fetch the information for a repository item as the ATG application views it.
I know there is a switch somewhere to turn it on, but an example would be great. Thanks!

Turn on logging debug for the repository component.
Peace
Shaik
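
In ATG, a repository is a Nucleus component, and setting its loggingDebug property to true makes it log every SQL statement it issues. A minimal sketch, assuming an ATG Commerce catalog repository; the Nucleus path below is only an example, so substitute the path of your own repository:

    # localconfig/atg/commerce/catalog/ProductCatalog.properties
    # log all SQL statements this repository executes
    loggingDebug=true

The same property can also be toggled at runtime from the Dynamo Admin component browser (/dyn/admin), which avoids restarting the server.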

Similar Messages

  • How to view the PL/SQL for DBMS_STATS

    Hi,
    Please can you help?
    When I run execute dbms_stats.gather_database_stats; I get the following error:
    BEGIN dbms_stats.gather_database_stats; END;
    ERROR at line 1:
    ORA-01476: divisor is equal to zero
    ORA-06512: at "SYS.DBMS_STATS", line 13336
    ORA-06512: at "SYS.DBMS_STATS", line 13682
    ORA-06512: at "SYS.DBMS_STATS", line 13826
    ORA-06512: at "SYS.DBMS_STATS", line 13790
    ORA-06512: at line 1
    I wanted to see the PL/SQL code for the package body to see where the 'divisor is equal to zero' could be coming from.
    I'm logged in to the database as SYS and ran the following query :
    select text from all_source
    where name = 'DBMS_STATS'
    and type = 'PACKAGE BODY';
    what I get is unreadable.
    Database version is 10.2.0.3
    thank you.

    Hello,
    You may consult the following note from My Oracle Support:
    Getting ORA-01476 during execution of DBMS_STATS.GATHER_SCHEMA_STATS (Doc ID 464440.1). Or, as you are on 10.2.0.3, you may also apply Patch Set 10.2.0.4, as it seems to be a bug. Note also that Oracle ships the SYS package bodies wrapped (obfuscated), which is why the text you selected from ALL_SOURCE is unreadable.
    Hope this helps.
    Best regards,
    Jean-Valentin
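    For completeness, a hedged alternative for pulling the whole body in one piece is DBMS_METADATA; for SYS-owned packages the returned source is still wrapped:

        select dbms_metadata.get_ddl('PACKAGE_BODY', 'DBMS_STATS', 'SYS') from dual;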

  • OBIEE 11g "WITH SAWITH0 AS" subquery factoring clause in the generated sql

    I've observed that OBIEE 11g writes the physical query to the query log using the WITH (sub-query factoring) clause, which makes the generated SQL elegantly readable. This is great! Thanks to the developers. However, I have some questions about this.
    __Background__
    Oracle Database's default behaviour is that if you have only one sub-query in the WITH section, it executes it as an inline view and does not materialize it before the main SQL is executed. If you have more than one, by default the database engine materializes all of them in the order of definition. In some cases this can completely blow up the SGA and make the query never-ending. To override this behaviour you can apply two hints that work in both inline views and sub-queries, /*+ MATERIALIZE */ and /*+ INLINE */ (placed as sketched below); however, Analytics 11g does not seem to have hint capabilities at the logical table level, only at the physical table level.
    If we go with the current defaults, I'm afraid developers unaware of this behaviour can bump into serious performance issues for the sake of some syntax candy at the generated-SQL level.
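    For reference, a minimal sketch of where each hint sits inside a factored sub-query, written against a hypothetical table T:

        -- keep the factored sub-query as an inline view (no materialization)
        with q as (
          select /*+ INLINE */ col1, col2 from t
        )
        select * from q;

        -- force materialization into a temporary segment
        with q as (
          select /*+ MATERIALIZE */ col1, col2 from t
        )
        select * from q;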
    __Questions__
    * Is it possible to tell the Analytics server not to use WITH but to generate inline views instead?
    * Is there any way to sneak in hints so that /*+ INLINE */ ends up in the appropriate place in the generated sub-queries when needed?
    * Does the Oracle Database have any initialization parameter that can influence this sub-query factoring behaviour and change the default?

    The WITH statement is not added to make the query more elegant; it is added for performance reasons. If your queries take long to run, you may have a design issue. In a typical DWH database the SGA needs to be seriously increased, since the queries run are much larger and more complex than on an OLTP database. In any case, you can disable the WITH statement in the Admin Tool by double-clicking your database object in the physical layer and going to the Features tab; the feature is called WITH_CLAUSE_SUPPORTED.

  • OBIEE 11g additional table in the generated SQL

    Hi gurus,
    In the OBIEE 11g RPD, we have a fact table whose logical table source consists of tables A, B, and C. Table A joins to table C, and table B joins to table C. In the Content tab, table C is used to filter out some records. If we select a column of A (whose aggregation rule is SUM) in Answers, then tables A, B, and C all appear in the generated SQL. If we remove the aggregation rule from this column, only tables A and C appear. We're wondering why this happens but can't figure it out.
    In this case, the relationship between tables B and C is many-to-one, so the additional table B in the generated SQL may cause a wrong sum, and we need the aggregation rule to sum the numbers. On the other hand, we can't remove the join between tables B and C, since other reports need it.
    Could anyone please provide a solution to remove the additional table B from the generated SQL?
    Thanks & regards!
    shell

    Are you using data from tables A, B, and C in one report at the same time? If not:
    1) Try separating the logical schema:
    - Create an alias table for A and one for B in the physical layer, then join these two tables. Now create another alias table for B (with a slightly different name) and one for C, and join those two tables (B+ and C).
    - Then put everything in your BMM and presentation layer.
    - Test your results. Take into account that if you are working with data in tables A and B you have to use the "fact table" B, but if you use data in table C, use the fact table built from B+ instead.
    Your logical schema would look like: A-->B and B+-->C
    J.

  • Internal Error - Unable to generate SQL for this Scheduled Workbook

    I am encountering the following error when loading the results of a Scheduled Workbook:
    Internal Error - Unable to generate SQL for this Scheduled Workbook (If you scheduled this workbook using a previous version of Discoverer, please reschedule and re-open)
    This only happens for one of the scheduled workbooks, and I am struggling to find an explanation for the problem. There are no references to database links in the workbook, so I can rule that out as a cause.
    Does anyone have any suggestions to what might be causing the problem?
    Many thanks
    Stewart

    Hi,
    "The version is OracleBI Discoverer Plus Version 10.1.2.48.18. Do you think this has anything to do with it?"
    If you are on this version, then you have the recommended patches applied.
    Did you try to reschedule the workbook and see if this helps in resolving the issue?
    I would suggest you enable server logging as this may help in collecting more details about the error.
    Note: 403689.1 - How To Generate Discoverer 10g (10.1.2) Session Server Logs In Text Format
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=403689.1
    Regards,
    Hussein

  • App Store on the iPad (iOS 6), can you view the Top Charts for iPhone?

    In the App Store on the iPad (iOS 6) can you view the Top Charts for iPhone? It used to be possible in a previous version of iOS.

    disregard post.

  • How to view the source code for a Native Method

    hi
    i am using some native methods in my code;
    can anybody tell me how to view the source code for them?
    nik

    Buy/acquire a C/C++/assembly code disassembler and run the shared library through it.

  • Viewing the license agreement for CS3

    How do I view the license agreement for my old Adobe CS3 Master Collection (education version)? I want to find out if I can put it on my new laptop as well; I've had it on my desktop only up to now. I can't find it in any of the booklets and papers that came with the software, and I can't find it in the help. I can only find the "patent and legal notices." Is that it? It just seemed like a lot of unhelpful gobbledegook to me.

    you're allowed up to two installations and activations, for your use only, but not both in use at any one moment.

  • Retrieving the milestone dates for an opportunity line item

    Good afternoon experts,
    I am looking for a way to retrieve the milestone dates for an opportunity line item. I have the line item GUID and need to find out the milestone dates for the item.
    Is there a table or FM that will allow me to do this?
    Thanks,
    Eric

    Hi Eric,
    I think you can use the FM 'CRM_ORDER_READ'. Pass the GUID of the transaction as input to the FM; you will get the milestone details from the parameter 'ET_APPOINTMENT'.
    Hope this helps.
    Regards,
    Ruby.

  • ORA-01489 Received Generating SQL for Report Region

    I am new to Apex and I am running into an issue with a report region I am puzzled by. Just a foreword, I'm sure this hack solution will get a good share of facepalms and chuckles from those with far more experience. I welcome suggestions and criticism that are helpful and edifying!
    I am on Apex 4.0.2.00.07 running on 10g, I believe R2.
    A little background: my customer has asked that an Excel spreadsheet be converted into a database application. As part of the transition they would like an export from the database that is in the same format as the current spreadsheet. Because the column count in this export is dynamic, based on the number of records in a specific table, I decided to create a temporary table for the export. The column names in this temp table are based on a "name" column from the same data table, so I end up with columns named 'REC_NAME A', 'REC_NAME B', etc. (e.g. Alpha Record, Papa Record, Echo Record, X-Ray Record). The column count is currently ~350 for the spreadsheet version.
    Because the column count is so large and the column names are dynamic, I've run into a host of challenges and errors creating this export. I am a contractor in a corporate environment, so making changes to the Apex environment or installation is beyond my influence, and really beyond what could be justified by this single requirement for this project. I have tried procedures and Apex plug-ins for generating the file, however the UTL_FILE package is not available to me. I am currently generating the SQL for the query in a function and returning it to the report region in a single column (the user will be doing a text-to-column conversion later). The data is successfully being generated; however, the SQL for the headers is where I am stumped.
    At first I thought it was because I returned both queries as one, joined with a 'union all'. However, after looking closer, the SQL being returned for the headers is about 10K characters long, and the SQL being returned for the data is about 14K. As mentioned above, the data is being generated and exported; however, when I generate the SQL for the headers I receive a report error with "ORA-01489: result of string concatenation is too long" in the file. I am puzzled why the shorter string is generating this message. I took the function from both pages and ran them at a SQL command prompt, and both return their string values without errors.
    I'm hopeful that it's something obvious and noobish that I'm overlooking.
    here is the code:
    data SQL function:
    declare
      l_tbl  varchar2(20);
      l_ret  varchar2(32767);
      l_c    number := 0;
      l_dlim varchar2(3) := '''|''';   -- quoted pipe literal used as the delimiter in the generated SQL
    begin
      l_tbl := 'EXPORT_STEP';
      l_ret := 'select ';
      -- walk the export table's columns in order and concatenate them,
      -- separated by the pipe literal, into one generated select list
      for rec in (select column_name from user_tab_columns where table_name = l_tbl order by column_id)
      loop
        if l_c = 1 then
          l_ret := l_ret || '||' || l_dlim || '|| to_char("' || rec.column_name || '")';
        else
          l_c := 1;
          l_ret := l_ret || ' to_char("' || rec.column_name || '")';
        end if;
      end loop;
      l_ret := l_ret || ' from ' || l_tbl;
      dbms_output.put_line(l_ret);
    end;

    header sql function:
    declare
      l_tbl  varchar2(20);
      l_ret  varchar2(32767);
      l_c    number := 0;
      l_dlim varchar2(3) := '''|''';
    begin
      l_tbl := 'EXPORT_STEP';
      l_ret := 'select ';                -- start the generated statement
      -- same loop as above, but emits the column names as quoted literals
      -- so the generated statement returns a single header row from dual
      for rec in (select column_name from user_tab_columns where table_name = l_tbl order by column_id)
      loop
        if l_c = 1 then
          l_ret := l_ret || '||' || l_dlim || '||''' || rec.column_name || '''';
        else
          l_c := 1;
          l_ret := l_ret || '''' || rec.column_name || '''';
        end if;
      end loop;
      l_ret := l_ret || ' from dual';
      dbms_output.put_line(l_ret);
    end;
    -------
    EDIT: just a comment on the complexity of this export: each record in the back-end table adds 12 columns to my export table. Those 12 columns come from 5 different tables and are the product of a set of functions calculating or looking up their values. This export is really a pivot table based on the records in another table.
    Edited by: nimda xinu on Mar 8, 2013 1:28 PM

    Thank you, Denes, for looking into my issue. I appreciate your time!
    It is unfortunately a business requirement. My customer has required that the data we are migrating to this app from a spreadsheet be exported in the same format, albeit temporarily, and I still must meet the requirement. I'm working around the 350 columns by dumping everything into a single column, which is working for the data; however, the headers export is throwing the ORA-01489 error. I did run into the error you posted in your reply. I attempted to work around it with the CLOB type but ended up running into my string concatenation error again.
    I'm open to any suggestions at this point, given that I have the data. I'm so close, because the data is exporting, but since the columns are dynamic the export does me little good without the headers to go along with it.
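
    Since the header row consists of literals anyway, one possible workaround (a sketch only, reusing the EXPORT_STEP table name from your post) is to skip generating a select ... from dual statement altogether and build the delimited header text itself in a CLOB on the PL/SQL side. ORA-01489 is raised when concatenation inside the SQL engine exceeds the VARCHAR2 limit; building the value in PL/SQL avoids the SQL engine entirely:

        declare
          l_tbl varchar2(20) := 'EXPORT_STEP';
          l_hdr clob;                        -- CLOB sidesteps the VARCHAR2 length cap
        begin
          for rec in (select column_name from user_tab_columns
                      where table_name = l_tbl order by column_id)
          loop
            if l_hdr is not null then
              l_hdr := l_hdr || '|';         -- same pipe delimiter the data rows use
            end if;
            l_hdr := l_hdr || rec.column_name;
          end loop;
          -- print the first 4000 bytes for inspection; a report region could
          -- return l_hdr itself instead of a generated query
          dbms_output.put_line(dbms_lob.substr(l_hdr, 4000, 1));
        end;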

  • How to use the generated SQL of "Recommendation"

    Dear Experts,
    I am using the KXEN recommendation function. After training the model, I exposed the result in the form of HANA SQL. However, I really have no idea how to make this SQL runnable, because there are some subqueries like:
    FROM $Dataset "SPACEIN"
      LEFT OUTER JOIN (SELECT * FROM  WHERE "GRAPH_NAME" = 'Transactions') "PRODUCTS" ON ("PRODUCTS"."KXNODEFIRST" = "SPACEIN".MemberID)
      LEFT OUTER JOIN (SELECT * FROM  WHERE "GRAPH_NAME" = 'Product') "RULES" ON ("PRODUCTS"."KXNODESECOND" = "RULES"."KXNODESECOND")
      LEFT OUTER JOIN (SELECT * FROM  WHERE "GRAPH_NAME" = 'Transactions') "NOTIN" ON ("RULES"."KXNODESECOND_2" = "NOTIN"."KXNODESECOND") AND ("NOTIN"."KXNODEFIRST" = "SPACEIN".MemberID)
    Note the empty FROM clauses in the derived tables. As for $Dataset, I assume it should be the data source used to train the model, but how do I handle the "GRAPH" parts (the next 3 subqueries)? Something is missing after each FROM clause; what should I fill in here, and why does KXEN generate such incomplete code?
    Thanks for your help!

    Hi Richard,
    To apply a recommendation model, you first need to save your model in your database. (saving the model in the database for such models and for what you want to do is mandatory).
    Once you have saved it, you will see many tables starting with "Kx": KxInfos, KxLinks, KxNodes...
    These tables contain information on the nodes available in the data used and on the links between products.
    Now, if you generate the SQL code for HANA, the name of your KxLinks table should be used in the SQL code.
    When prompted for $Dataset and $Key, you should specify in place of $Dataset the name of the table on which you want to apply your model. In place of $Key, you should enter the name of the key of this table (e.g. UserId).
    In my case $Dataset =>KAR_UniqueCustomers and $Key=>UserID
    My generated code looks like this :
    FROM KAR_UniqueCustomers "SPACEIN"
    LEFT OUTER JOIN (SELECT * FROM KxLinks1 WHERE "GRAPH_NAME" = 'Transactions') "PRODUCTS" ON ("PRODUCTS"."KXNODEFIRST" = "SPACEIN".UserID)
    LEFT OUTER JOIN (SELECT * FROM KxLinks1 WHERE "GRAPH_NAME" = 'ItemPurchased') "RULES" ON ("PRODUCTS"."KXNODESECOND" = "RULES"."KXNODESECOND")
    Note that your application table must contain:
    - A column with the same name as your user identifier in the training dataset. It contains the list of distinct users (strictly 1 row for each customer id)
    - A column with the same name as your products name in the training dataset. It can contain the name of the same product for all customers.
    I hope you'll make it work !
    Armelle

  • Help: How do you keep the generated source for the servicegen-generated stuff?

    Only the following classes have corresponding Java files:
    <name>Service
    <name>Service_Impl
    <name>ServicePort
    <name>ServicePort_Stub
    I need access to the marshalling code for the DTO java classes (I think there is a bug).
    Is there a keepGenerated option?
    Thanks,
    Nick

    I found the problem. I have posted it. See here:
    http://newsgroups2.bea.com/cgi-bin/dnewsweb?cmd=article&group=weblogic.developer.interest.webservices&item=1014&utag=
    Cheers,
    Nick
    "Nick Minutello" <[email protected]> wrote:
    Actually, you are right - the source for the codecs can be found in the webservice WAR. However, I have a funny feeling that the two are not the same (i.e., there is a difference between the classes in the WAR, the classes in the client jar generated by servicegen, and the client jar generated by clientgen). I am getting some odd behaviour, and I haven't gotten to the bottom of it yet.
    It seems that for the Data Transfer Objects it makes a difference whether you use the generated classes or the originals (I am using the useServerTypes="True" option in the client element of the servicegen task).
    -Nick
    "Neal Yin" <[email protected]> wrote:
    Actually, the behavior right now should be "always keepgenerated". Can you post your EJB/java class? We can check it out.
    -Neal
    "Nick Minutello" <[email protected]> wrote
    in message news:[email protected]...
    Actually, I think it is critical that this is done ASAP. It is impossible to debug errors such as this without it!
    java.lang.NoSuchMethodError
    at com.x.bond.view.BondDetailDTOCodec.typedInvokeSetter(BondDetailDTOCodec.java:1214)
    at com.x.bond.view.BondDetailDTOCodec.invokeSetter(BondDetailDTOCodec.java:964)
    Any chance of a timeframe? I don't particularly want to go and use Axis unless I have to... but I seem to be spending a lot of time debugging this weblogic soap implementation...
    Thanks.
    Regards,
    Nick
    "Neal Yin" <[email protected]> wrote:
    Sorry, we don't have a keepgenerated option yet. We are adding one. Could you post what bug you think we might have? We can help you get past it.
    Thanks,
    -Neal
    "Nick Minutello" <[email protected]>
    wrote
    in message news:[email protected]...
    Only the following classes have corresponding Java files:
    <name>Service
    <name>Service_Impl
    <name>ServicePort
    <name>ServicePort_Stub
    I need access to the marshalling code for the DTO java classes (I think there is a bug).
    Is there a keepGenerated option?
    Thanks,
    Nick

  • How to view the source code for a Native Method

    hi
    i am using the native method Math.pow(), but since it is a native method it takes more time to execute;
    since i have to call this method at least 700 times, it is affecting the performance of the application;
    so i am thinking of writing a user-defined method based on the logic implemented in pow();
    for that i need to know which .c file this method is written in;
    or whether it cannot be viewed at all (if it is in the .dll);
    can you help me out
    nik

    Hi!
    Here is part of StrictMath.java code.
    * The class <code>StrictMath</code> contains methods for performing basic
    * numeric operations such as the elementary exponential, logarithm,
    * square root, and trigonometric functions.
    * <p>
    * To help ensure portability of Java programs, the definitions of
    * many of the numeric functions in this package require that they
    * produce the same results as certain published algorithms. These
    * algorithms are available from the well-known network library
    * <code>netlib</code> as the package "Freely Distributable
    * Math Library" (<code>fdlibm</code>). These algorithms, which
    * are written in the C programming language, are then to be
    * understood as executed with all floating-point operations
    * following the rules of Java floating-point arithmetic.
    * <p>
    * The network library may be found on the World Wide Web at:
    * <blockquote><pre>
    * http://metalab.unc.edu/
    * </pre></blockquote>
    * <p>
    * The Java math library is defined with respect to the version of
    * <code>fdlibm</code> dated January 4, 1995. Where
    * <code>fdlibm</code> provides more than one definition for a
    * function (such as <code>acos</code>), use the "IEEE 754 core
    * function" version (residing in a file whose name begins with
    * the letter <code>e</code>).
    * @author unascribed
    * @version 1.9, 02/02/00
    * @since 1.3

  • How to examine the generated SQL statement in Receiver JDBC Adapter

    I have been searching this forum for how to display the generated SQL statement (by the JDBC receiver adapter).
    The only suggestion is to use the RWB, but I was unable to find any details about how to do so.
    Any help is appreciated

    Hi,
    To add, you can see the SQL statements in the Audit Log of the RWB.
    Select Message Monitoring -> Adapter Engine, choose your entry, and click on the Details option button; you can see the SQL statements in the Audit Log.
    Regards,
    Sudharshan
    Message was edited by:
            Sudharshan Aravamudan

  • Two log-ons before viewing the PO document for approval.

    Hi Workflow Experts,
    We have a problem wherein the system requires the PO approver to log on twice before viewing the PO for approval.
    The customer requirement is to approve a PO using workflow. Below is the brief description of the scenario;
    DESCRIPTION OF REQUIREMENT:
    Step 1. A purchaser creates a Purchase Order and saves it.
    Step 2. An email notification is received by the approver. In the notification, an attachment is double-clicked to activate the log-on screen (this is an OS log-on). The approver logs into the OS log-on screen, and the system then directs the approver to the PO for approval.
    PROBLEM:
    In step 2, another log-on screen (this is the SAP log-on) appears after the approver has logged in at the OS log-on. How can I remove the second log-on so that the process complies with the requirement of only one log-on?
    I hope you can help me soon. Thank you very much in advance!

    For this you have to use Single Sign-On; this is something the business needs to decide. If only this user is facing the issue, then Single Sign-On is not working for them. Search SDN for Single Sign-On and ask the appropriate team to enable SSO for this user.
    Thanks
    Arghadip
