Batch Reading with Custom SQL

Hello,
Queries
1. Is it possible to use Batch Reading in conjunction with custom stored procedures / SQL?
2. Is it possible to map an attribute to a SQL expression (like Hibernate's formula columns, mapped using the formula property)?
Background
1. We use TopLink 11g (11.1.1.0.1) (not EclipseLink) in our application and control the mappings using XML files (not annotations).
2. We are migrating a legacy application, with most of its data retrieval logic in stored procedures, to Java.
3. I am effectively a newbie to TopLink.
Scenario
1. We have a deep class hierarchy: ClassA has a one-to-many relation with ClassB, ClassB has a one-to-many relation with ClassC, and so on and so forth.
2. For each of these classes the data retrieval logic lives in stored procedures (coming from the legacy application) containing not-so-simple queries.
3. There are also quite a few attributes that actually represent computed values (computed and returned by the stored procedures), and the logic for computing those values is not simple either.
4. So, to make things easy, we configured TopLink to use the stored procedures to retrieve data for objects of ClassA, ClassB and ClassC.
5. But since the class hierarchy is deep, we ended up firing too many stored procedure calls to the database.
6. We thought the Batch Reading feature could help with this, but I have come across documentation that says it won't work if you override TopLink's queries with stored procedures.
7. I wrote some sample code to verify this: for the hierarchy shown above it uses the specified custom procedure for ClassA and ClassB (I also tried replacing the stored procedures with custom SQL, but the behavior is the same), but for ClassC and below it falls back to its own generated SQL.
8. This is a problem because the generated SQL refers to the computed columns, which are not present in the underlying source tables.
Thanks
Arvind

Batch reading is not supported with custom SQL or stored procedures.
Join fetching is, though, so you may wish to investigate that (you need to ensure the stored procedure returns the correct data); see the sketch below.
James : http://www.eclipselink.org
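
For illustration, a minimal sketch of query-level join fetching in TopLink, assuming a one-to-many mapping on ClassA named "classBs" and a hypothetical stored procedure READ_A_WITH_B that returns the ClassA and ClassB columns in one result set (the mapping name and procedure name are placeholders, not from the thread):

import java.util.List;

import oracle.toplink.queryframework.ReadAllQuery;
import oracle.toplink.queryframework.StoredProcedureCall;
import oracle.toplink.sessions.Session;

public class JoinFetchSketch {

    // Read ClassA objects through the legacy procedure while join-fetching
    // the "classBs" one-to-many relationship in the same call.
    @SuppressWarnings("unchecked")
    public static List<ClassA> readAWithB(Session session) {
        ReadAllQuery query = new ReadAllQuery(ClassA.class);

        // Join fetching (unlike batch reading) can be combined with a custom
        // call, provided the call returns the columns of both tables.
        query.addJoinedAttribute("classBs");

        StoredProcedureCall call = new StoredProcedureCall();
        call.setProcedureName("READ_A_WITH_B"); // hypothetical procedure
        query.setCall(call);

        return (List<ClassA>) session.executeQuery(query);
    }
}

The key constraint, as noted in the reply above, is that the procedure's result set must contain every mapped column of both ClassA and ClassB so the joined rows can be built.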

Similar Messages

  • Incorrectly read attributes with custom SQL query

    Hi folks,
    I'm trying to read in a random sampling of records from a table, so I tried:
    ReadAllQuery q = new ReadAllQuery();
    q.setReferenceClass(Foo.class);
    q.addPartialAttribute("bar");
    q.setSQLString("select * from Foo sample(10)");
    q.useCursoredStream(100,100, new ValueReadQuery("select count(*) from Foo sample(10)"));
    This all worked fine, the Foos were retrieved, but after the first dozen or so, all the "bar" attributes were null, which they should not be. This only occurs when using a custom SQL string. I tried bringing back all the objects (i.e., without using setSQLString) and examining them and all the "bar"s were present. But when I use setSQLString the attributes don't get read correctly.
    Can anyone tell me what I'm doing wrong? Is there a better way to do it?
    Thanks,
    Bryn

    Okay, here's the actual code:
    ReadAllQuery q = new ReadAllQuery();
    q.setReferenceClass(ActivityCenter.class);
    boolean useCustom = true;
    if (useCustom) {
        q.setSQLString("SELECT SetupDt, TerminationDt, ReinstateDt, IID, upDt, ID, " +
            "Stat, SubType, Acct FROM ACtr");
        q.useCursoredStream(100, 100, new ValueReadQuery("select count(IID) from actr"));
    } else {
        q.useCursoredStream(100, 100);
    }
    session.logMessages();
    activityCenters = (CursoredStream) session.executeQuery(q);
    ActivityCenter ac = (ActivityCenter) activityCenters.read();
    if (ac.getAccount() == null) {
        System.err.println(ac.getID() + ": Oops!");
    } else {
        System.err.println(ac.getID() + ": Has account!");
        System.exit(0);
    }
    So, everything about this program is the same - how the mapping is done, how things get initialized, everything. The only difference is whether I use a custom SQL query or not. Here's what it looks like when I run it both ways:
    First, custom:
    DatabaseSession(2433702)--Connection(393272)--SELECT SetupDt, TerminationDt, ReinstateDt, IID, upDt, ID, Stat, SubType, Acct FROM ACtr
    1.1: Oops!
    Now, without the custom SQL:
    DatabaseSession(393272)--Connection(7896086)--SELECT SetupDt, TerminationDt, ReinstateDt, IID, upDt, ID, Stat, SubType, Acct FROM ACtr
    DatabaseSession(393272)--Connection(7896086)--SELECT
    //bunch of fields from the Acct attribute...
    FROM Acct WHERE (IID= 'ffbe5c47f3ea762cfd50fbe9e6d6de6')
    1.1: Has account!
    Notice that the first query on Actr is identical in both cases. Also note there are no null Actr.acct fields in the database:
    SQL> select * from actr where actr.acct is null;
    no rows selected
    SQL>
    This is on Oracle 9i, with the thin jdbc driver.
    Also, I was curious that you asked if I was using partial attributes - when I try to add a partial attribute when using the custom SQL string, I get exceptions like this one:
    java.lang.ClassCastException: oracle.toplink.internal.queryframework.CallQueryMechanism
    at oracle.toplink.queryframework.ObjectLevelReadQuery.initializeDefaultBuilder(Unknown Source)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.getExpressionBuilder(Unknown Source)
    at oracle.toplink.queryframework.ObjectLevelReadQuery.addPartialAttribute(Unknown Source)
    at jenkon.magellan.util.MLMulator.getActivityCenters(MLMulator.java:782)
    at jenkon.magellan.util.MLMulator.run(MLMulator.java:897)
    at jenkon.magellan.util.MLMulator.main(MLMulator.java:943)
    Thanks,
    Bryn
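
    For reference, a minimal sketch of a partial-attribute read without custom SQL, using the Foo/"bar" names from the first post (partial-object queries cannot maintain the cache, hence dontMaintainCache()); this is a generic illustration, not a fix confirmed in the thread:

    import java.util.List;

    import oracle.toplink.queryframework.ReadAllQuery;
    import oracle.toplink.sessions.Session;

    public class PartialReadSketch {

        // Fetch only the "bar" attribute (plus the primary key) of Foo,
        // letting TopLink generate the SQL.
        @SuppressWarnings("unchecked")
        public static List<Foo> readBars(Session session) {
            ReadAllQuery q = new ReadAllQuery(Foo.class);
            q.addPartialAttribute("bar");  // only "bar" is selected
            q.dontMaintainCache();         // partial objects must not be cached
            return (List<Foo>) session.executeQuery(q);
        }
    }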

  • Null Pointer Exception when working with Custom Sql

    I viewed the video on adding custom SQL and everything seemed to work fine with regards to adding it to my report. However, I get a null pointer exception when I add a field from the custom SQL table to the report and try to run it, including when I try to View SQL. There doesn't seem to be a stack trace that I can show.
    BTW... this was a workaround for not being able to use stored procedures... I have killed 2 days now working on different ways to work around the stored procedure thing.
    Thanks,
    Steve

    Hi,
    I'm trying to use custom SQL in my report. I could successfully create the custom SQL and add it to my report in the designer, but I'm getting a NullPointerException when I try to see the results in Preview. Can someone please help me resolve this issue?
    Thanks!

  • Working with Custom SQL Using Descriptor Query Manager Queries

    Hi All,
    I am working on Descriptor Query Manager queries, configuring custom SQL using Java and the Workbench.
    Using Java, I wrote a static method as in the code given below.
    public static void insertEmployee(ClassDescriptor descriptor) {
        descriptor.getQueryManager().setInsertSQLString(
            "insert into EMPLOYEE (EMP_ID, EMP_NAME, EMP_JOB, SAL, DEPTNO) " +
            "values (#EMP_ID, #EMP_NAME, #EMP_JOB, #SAL, #DEPTNO)");
    }
    I also wrote an insert SQL query in the Custom SQL tab of the TopLink Workbench.
    Whether I configure it in Java or in the TopLink Workbench, I am not sure how to call this insert query from the session EJB bean.
    Can any one suggest me in this regard.
    Thanks in advance
    regards,
    Satish

    What is the problem you are experiencing?
    Normally you can just execute the query by calling
    executeQuery(queryName, domainClass) on the session.
    See also
    http://www.oracle.com/technology/products/ias/toplink/doc/10131/main/_html/qrybas003.htm#BCFIBGGJ
    Just out of curiosity: why do you need custom SQL to insert something? Can't you use persist()?
    regards,
    Lonneke

    Or even UnitOfWork? Why go down the route of using custom inserts to insert objects unless you have some business logic that TopLink's UnitOfWork API cannot provide?
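
    As a minimal sketch of the UnitOfWork route mentioned above (assuming an Employee class mapped to the EMPLOYEE table; once the custom insert SQL is set on the descriptor's query manager, a normal commit will use it):

    import oracle.toplink.sessions.Session;
    import oracle.toplink.sessions.UnitOfWork;

    public class InsertEmployeeSketch {

        // Register a new Employee; on commit TopLink issues the insert using
        // the custom SQL configured on the descriptor's query manager.
        public static void insert(Session session, Employee employee) {
            UnitOfWork uow = session.acquireUnitOfWork();
            uow.registerObject(employee);
            uow.commit();
        }
    }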

  • Reports with custom SQL

    Hi all,
    is there a way to find out which Webi reports are running custom SQL?
    Basically we want to find this out without opening each report in edit mode.
    Thanks.

    Hi George,
    I encounter this issue OFTEN because I add custom SQL to many queries. The problem for me, as you may have encountered yourself, is that when I go in and make a minor tweak to a query by adding a field, the custom SQL is flushed away.
    Since WebI doesn't show this important information on the Document List, I have made it a habit to add a note to the document properties. (Right click on the document title and select Properties.) The properties will appear directly beneath the title on the Document List.
    My Document List then shows the note directly under each report title.
    That's the solution I came up with. It reminds me that if I open a document to modify it in any way, I need to be careful not to lose my custom SQL code.
    Good luck,
    Marvin

  • Batch input with customer fields

    I have to create a batch input process that posts documents through FB01L.
    Because customer fields have been added to the lines of each document, when I try to move from one line of the document to the next, a screen appears where I have to enter some data (two of the fields are mandatory). I had solved this issue by ending with a "Call Transaction" with the option "No batch input" = 'X'. This way, these customer fields appear directly on the screen, and not in a popup screen, so the batch input process was very easy.
    But I have been told that I must not use "Call Transaction"; I have to create a session, and the user will post the document by launching it in SM35. This way, even when adding "No batch input" = 'X' in the options of "BDC_INSERT", the popup screen appears when running the session.
    So, is there any way of running the session in SM35 with the option "No batch input"? If it's not possible, what is the best way of dealing with these popup screens? I have been able to go from one dynpro to another, and to open that popup screen, but I can't fill the mandatory fields.
    I hope you understand my problem; please excuse my English.

    Hi,
    Go to transaction code MASS and enter the object KNA1 for customers.
    Execute it and you will find all the options available for making mass changes.
    If your requirement is not met there, you will have to go for an LSMW.
    Regards,
    Dewang T

  • How to Update a particular column with custom SQL in JHeadstart

    Hi Friends,
    I was trying to put an update SQL statement in a VO (a "missing right parenthesis" exception is thrown).
    Let me explain my requirement.
    I have a column TARGET in the Sales_Person table and a column INCENTIVE in the Sales_Admin table.
    The admin allots a particular amount in the Incentive column for all the sales (i.e. globally, for one and all).
    When a sales person enters a target of some number, say 80, in the Target field,
    on save I want to deduct the target from the incentive and update the result in the Incentive column.
    Can you please suggest where I should put this functionality?
    Can you please help me out in solving this issue?
    Thanks in advance.
    Rahul

    Rahul,
    You can do this in the doDML method of your Entity Object (see the sketch below).
    See this white paper:
    http://www.oracle.com/technology/products/jdev/collateral/papers/10131/businessrulesinadfbctechnicalwp.pdf
    If you have follow-up questions, please use the JDeveloper forum, since your question is not related to JHeadstart.
    Steven Davelaar,
    JHeadstart Team.
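
    A rough sketch of the doDML approach described above, assuming a SalesPerson entity object with Target and Incentive attributes on the same entity (the real case spans two tables, and the attribute names and BigDecimal types are placeholders, not from the thread):

    import java.math.BigDecimal;

    import oracle.jbo.server.EntityImpl;
    import oracle.jbo.server.TransactionEvent;

    public class SalesPersonImpl extends EntityImpl {

        // Adjust the incentive just before the row is posted to the database.
        @Override
        protected void doDML(int operation, TransactionEvent e) {
            if (operation == DML_INSERT || operation == DML_UPDATE) {
                BigDecimal target = (BigDecimal) getAttribute("Target");
                BigDecimal incentive = (BigDecimal) getAttribute("Incentive");
                if (target != null && incentive != null) {
                    setAttribute("Incentive", incentive.subtract(target));
                }
            }
            super.doDML(operation, e);
        }
    }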

  • Custom SQL in WebI reports reverting to generated SQL when exporting universe

    Hi all -
    I've searched the forum and didn't find any posts regarding my issue. I was wondering if anyone has experienced the following behavior in WebI XI R2.
    We have a few reports that run on a scheduled daily basis. The report writer created these reports with custom SQL to use the current date in the select criteria. But when we make changes to the specific universe that these reports use and export that universe from Designer, or import it from QA to Production via the Import Wizard, these reports all revert to the initially generated SQL. The report writer needs to go into each one of these reports to set them back to use custom SQL. I should also add that the generated SQL of these reports is fairly complicated.
    We have about 20 of these reports and right now we are on a monthly deploy cycle, so it's becoming an issue to get these reports set back to use custom SQL every month.
    Has anyone come across an issue like this?  If so, was there a resolution?
    Thanks in advance for any advice on this.

    Walter - Thanks. Doing the universe-solution workaround is my last resort, which will probably be the case.
    It does seem like this is some sort of bug in WebI because it only happens in certain cases - complex custom SQL. We have other reports with custom SQL that have simple SELECT statements and don't show this behavior when the universe is exported. So I was looking to see if anyone else has encountered this and knew of a patch or something that would address this issue.

  • Change source - Effect on custom SQL

    Dear Experts,
    I have a WebI report which uses a universe with custom SQL. Somehow we lost the universe. We have a similar universe on the server, and I have a doubt here.
    What happens to the custom SQL when you change the source to the similar universe using the "Change source" option? Will the custom SQL still work, or does it revert back to the original SQL from the universe?
    Thanks,
    Rajesh.

    Hi Rajesh,
    I think you can verify this by creating a similar report (same dimensions, measures, etc.) based on the universe you have on your server, comparing the query from both reports, applying the changes that are in the custom SQL, and then comparing both reports in terms of data.
    If the data looks similar, then yes, you can use that universe and edit the custom query; if not, then check the universe.
    Hope it helps!
    Cheers,
    Shardendu Pandey

  • Schema/database name, custom sql folders

    We have created a single EUL for all business areas and when you create a simple or complex folder you can set the database attribute to default. Is there any way to do this with custom sql folders?
    Thanks, Jennifer

    oops, I don't think I gave enough information. In the custom folder the code is
    select c1, c2, c3 from schema_name.some_table
    (Discoverer refers to 'schema_name' as a 'database attribute' in simple and complex folders). We want to run a report, based on the custom folder, as a different user attached to a different schema, e.g. schema_b, and get the results from schema_b. We can do this with simple and complex folders by setting the 'database attribute' to <default>, but can't find a way with custom folders.
    Thx Jennifer

  • How to specify custom SQL in polling db adapter with logical delete option

    Hi all,
    I am writing a SOA composite app using JDeveloper SOA Suite 11.1.1.4 connecting to a SQL Server db using a polling DB Adapter with the logical delete option to send data to a BPEL process.
    I have requirements which go beyond what is supported in the JDeveloper UI for DB Adapter polling options, namely:
    * update more than one column to mark each row read, and
    * specify different SQL for the logical delete operation based on whether bpel processing of the data polled was successful or not.
    A complicating factor is that the polling involves two tables. Here is my full use-case:
    1) Polling will select data derived from two tables: e.g. 'headers' and 'details' simplified for this example:
    table: headers
    hid - primary key
    name - data label
    status - 'unprocessed', 'processed', or 'error'
    processedDate - null when data is loaded, set to current datetime when row is processed
    table: details
    hid - foreign key pointed at header.hid
    attr - data attribute name
    value - value of data attribute
    2) There is a many:1 relationship between detail and header rows through the hid columns. The db adapter polling SELECT shall return results from an outer join consisting of one header row and the associated detail rows where header.status = 'unprocessed' and header.hid = details.hid. (This is supported by the Jdeveloper UI)
    3) The polled data will be sent to be processed by a bpel process:
    3.1) If the bpel processing succeeds, the logical delete (UPDATE) operation shall set header.status = 'processed', and header.processedDate = 'getdate()'.
    3.2) If bpel processing fails (e.g. hits a data error while processing the selected data) the logical delete (UPDATE) operation shall set header.status = 'failed', header.processedDate = 'getdate()', and header.errorMsg = '{some text returned from bpel}'.
    Several parts of #3 are not supported by the JDeveloper UI: updating multiple columns to mark the row processed, using getdate() to populate a value of one of those column updates, doing different update operations based on the results of the BPEL processing of the data (success or error), and using data obtained from BPEL processing as a value of those column updates (error message).
    I have found examples which describe specifying custom SQL using the polling-delete option to create a template and then modifying the TopLink file(s) to specify custom select and update SQL to implement a logical delete (e.g. http://dlimiter.wordpress.com/2009/11/05/advanced-logic-in-oracle-bpel-polling-database-adapter/ and http://myexperienceswithsoa.blogspot.com/2010/06/db-adapter-polling-tricks.html). But none of them match what I've got in my project - in the first case maybe because I'm using a higher version of JDeveloper, and in the second I think because in my case two tables are involved.
    Any suggestions would be appreciated. Thanks, John

    Hi John,
    You've raised a good scenario.
    First of all, let me say that the purpose of the DB polling transaction is to give you an option to initiate a process from a DB table/view, not to update multiple fields in a table (or perform other complex manipulation on the table).
    So, when you choose to update a field in a record after reading it, you are "telling" the engine not to poll this record again. Sure, I guess you can find a solution/workaround for it, but I don't think this is the way....
    The question now is what to do?
    You can have another DB adapter where you update the data after finishing the process. In that case, after reading the data (in the polling transaction) update header.status = 'processed', for example, and after processing the selected data update the rest of the fields.
    Hope it makes some sense to you.
    Arik

  • Performance for joining 9 custom tables with native SQL?

    Hi Expert,
    I need your opinion regarding the performance of joining 9 tables with native SQL. Recently I have had to tune a customized cost-extraction report. This report extracts about 10 million material cost records every day.
    The current program tries to populate the condition data, insert it into a customized table, and then join all the tables to get the data using native SQL.
    SELECT /*+ ordered use_hash(mst,pg,rg,ps,rs,dpg,drg,dps,drs) */
                mst.werks, ....................................
    FROM
                sapsr3.zab_info mst,
                sapsr3.zab_pc pg,
                sapsr3.zab_rc rg,
                sapsr3.zab_pc ps,
                sapsr3.zab_rc rs,
                sapsr3.zab_g_pc dpg,
                sapsr3.zab_g_rc drg,
                sapsr3.zab_s_pc dps,
                sapsr3.zab_s_rc drs
            WHERE mst.zseq_no = :p_rep_run_id
            AND mst.werks = :p_werks
            AND mst.mandt = rg.mandt(+)
            AND mst.ekorg = rg.ekorg(+)
            AND mst.lifnr = rg.lifnr(+)
            AND mst.matnr = rg.matnr(+)
            ...............................................   until all tables (9 tables) are joined
            AND ps.mandt = dps.mandt(+)
            AND ps.knumh = dps.knumh(+)
            AND ps.zseq_no = dps.zseq_no(+)
            AND COALESCE (dps.kbetr, drs.kbetr, dpg.kbetr, drg.kbetr) <> 0
    It seems the query asks the database to use hash joins. Would that burden the database and impact other SAP processes?
    Please advise
    Thank You and Best Regards

    You can only argue from measurements, and that is not the case here.
    Judging from the code, I see only that you do not understand it at all, so better leave it as it is. It is not a hashed table, but a hash join on these tables.

  • Discoverer Custom SQL Report with Group By,Having

    Hi all,
    I'm working with Discoverer 4.1.41.05 and retrieve data from Oracle HRMS.
    I want to create a report based on a custom SQL Folder in Discoverer Administration to solve the following scenario:
    SELECT EMPLOYEE_NUMBER FROM PER_ALL_PEOPLE_F
    MINUS
    SELECT EMPLOYEE_NUMBER FROM PER_ALL_PEOPLE_F,PER_EVENTS, PER_BOOKINGS
    GROUP BY EMPLOYEE_NUMBER
    HAVING SUM(PER_EVENTS.ATTRIBUTE1) >= 10)
    The problem is that when I run the SQL from SQL*Plus it works fine and produces the correct result.
    When I create the custom folder and a Discoverer report based on it, I retrieve all employees, since the second SELECT statement returns no rows!
    Does anyone has any idea how I can solve this problem or if there is another way to create this scenario?
    Any feedback is much appreciated.
    Thanking you in advance.
    Elena

    Hi Elena,
    I think you are missing some joins, you could try something like:
    SELECT EMPLOYEE_NUMBER FROM PER_ALL_PEOPLE_F p
    WHERE NOT EXISTS
    (SELECT NULL
    FROM PER_EVENTS e, PER_BOOKINGS b
    WHERE b.person_id = p.person_id
    AND e.event_id = b.event_id
    GROUP BY b.person_id
    HAVING SUM(e.ATTRIBUTE1) >= 10)
    Rod West

  • Read files in directories with PL/SQL

    Hi
    Is there any way to read files (just the names) in a directory using only PL/SQL? I mean, finding the files with a .dat extension in a given directory, for example.
    Thanks

    I don't know Forms, but I know SQL*Plus and the Oracle database. Normally, PL/SQL running on the database can only access files on the host where the database instance is running, even if you start PL/SQL from a SQL*Plus connection on another host.
    PL/SQL runs only on the database instance, not on the client side, even if you start the PL/SQL code from a remote connection with SQL*Plus.

  • Read consistency in query with pl/sql functions

    Not sure if this is a bug or a feature, but a query containing a user-defined PL/SQL function does not include tables accessed within the PL/SQL function in the read-consistent view of data, e.g.
    select myfunc from tableA
    where myfunc is a stored function that queries tableB and returns a value.
    If a change to tableB is committed in another session during the fetch phase of the select statement, then the fetched rows reflect the changes. The database does not recognise tables accessed in the PL/SQL function as being part of the query.
    This happens in 7.3.4 and 8.1.6. I don't have 9i so I can't tell.
    Does anyone know if this is a bug or a feature?
    Aside: you can also drop the PL/SQL function whilst the fetch is running. It will kill the fetch. No DDL lock is taken on the PL/SQL function whilst the select is running! Seems wrong.

