Does TopLink database query use PreparedStatement by default

Hi,
I would like to know: when we invoke a ReadAllQuery/ReportQuery by constructing a TopLink expression, does TopLink internally translate that into a PreparedStatement, bind the variables to the specified values, and then execute the actual DB query?

A prepared statement can be cached and re-executed with different bind parameters. This saves the database from having to re-parse the SQL, which can give the database a performance advantage. The degree of this benefit varies with the database and version, but it can be significant in some cases.
By default TopLink uses prepared statements, but does not cache them. This is mainly because with JEE DataSources TopLink does not have control over the connection, so the DataSource must perform the statement caching. Most DataSources support this (though usually not by default). Some older JDBC drivers may also have issues with caching statements.
TopLink fully supports caching prepared statements. This can easily be enabled as an option on the TopLink DatabaseLogin.setShouldCacheAllStatements(), in sessions.xml, or with the JPA persistence.xml property "toplink.jdbc.cache-statements" or "eclipselink.jdbc.cache-statements".
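For example, a minimal sketch of enabling that property programmatically when the EntityManagerFactory is created (the persistence unit name "myPU" is an assumption; the same property can just as well go in persistence.xml):
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class StatementCacheExample {
    public static EntityManagerFactory createFactory() {
        Map<String, Object> props = new HashMap<String, Object>();
        // ask TopLink/EclipseLink to cache prepared statements;
        // with a JEE DataSource, configure caching on the DataSource instead, as noted above
        props.put("toplink.jdbc.cache-statements", "true");
        return Persistence.createEntityManagerFactory("myPU", props);
    }
}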

Similar Messages

  • Does a Database Control use Prepared Statements?

    When I add a method to a database control I have to supply the SQL for that method. Under the covers does the database control convert that SQL into a prepared statement? Is this documented anywhere?
    Thanks for the help!
    Rob

    Rob,
    it does not seem to be documented, but the Database Control always uses a PreparedStatement internally.
    -Kai

  • Why does my query not use an index?

    I have a table with some processed rows (state: 9) and some unprocessed rows (states: 0,1,2,3,4).
    This table has over 120000 rows, but this number will grow.
    Most of the rows are processed and most of them also contain a group id. Number of groups is relatively small (let's assume 20).
    I would like to obtain the oldest some_date for every group. These values have to be outer joined to an on-line report (which contains one row for each group).
    Here is my set-up:
    Tested on: 10.2.0.4 (Solaris), 10.2.0.1 (WinXp)
    drop table t purge;
    create table t(
      id number not null primary key,
      grp_id number,
      state number,
      some_date date,
      pad char(200)
    );
    insert into t(id, grp_id, state, some_date, pad)
    select level,
         trunc(dbms_random.value(0,20)),
            9,
            sysdate+dbms_random.value(1,100),
            'x'
    from dual
    connect by level <= 120000;
    insert into t(id, grp_id, state, some_date, pad)
    select level + 120000,
         trunc(dbms_random.value(0,20)),
            trunc(dbms_random.value(0,5)),
            sysdate+dbms_random.value(1,100),
            'x'
    from dual
    connect by level <= 2000;
    commit;
    exec dbms_stats.gather_table_stats(user, 'T', estimate_percent=>100, method_opt=>'FOR ALL COLUMNS SIZE 1');
    Tom Kyte's printtab
    ==============================
    TABLE_NAME                    : T
    NUM_ROWS                      : 122000
    BLOCKS                        : 3834
    I know, this could be easily solved by a fast refresh on commit materialized view like this:
    select
      grp_id,
      min(some_date)
    from
      t
    where
      state in (0,1,2,3,4)
    group by
      grp_id;
    + I have to create a materialized view log on (grp_id, some_date, state)
    Number of rows with active state will be always relatively small. Let's assume 1000-2000.
    So another idea was to create a selective index: an index which would contain only data for rows with an active state.
    Something like this:
    create index fidx_active on t ( 
      case state 
        when 0 then grp_id
        when 1 then grp_id
        when 2 then grp_id
        when 3 then grp_id
        when 4 then grp_id
      end,
      case state
        when 0 then some_date
        when 1 then some_date
        when 2 then some_date
        when 3 then some_date
        when 4 then some_date
      end) compress 1;
    so a tuple (grp_id, some_date) is projected to the tuple (null, null) when the state is not an active state, and therefore it is not indexed.
    We can save even more space by compressing the 1st expression.
    analyze index fidx_active validate structure;
    select * from index_stats
    @pr
    Tom Kyte's printtab
    ==============================
    HEIGHT                        : 2
    BLOCKS                        : 16
    NAME                          : FIDX_ACTIV
    LF_ROWS                       : 2000 <-- we're indexing only active rows
    LF_BLKS                       : 6 <-- small index: 1 root block with 6 leaf blocks
    BR_ROWS                       : 5
    BR_BLKS                       : 1
    DISTINCT_KEYS                 : 2000
    PCT_USED                      : 69
    PRE_ROWS                      : 25
    PRE_ROWS_LEN                  : 224
    OPT_CMPR_COUNT                : 1
    OPT_CMPR_PCTSAVE              : 0
    Note: @pr is Tom Kyte's print table script as adapted by Tanel Poder (I'm using Tanel's library).
    Then I created a query to be outer joined to the report (report contains a row for every group).
    I want to achieve a full scan of the index.
    select
      case state -- 1st expression
        when 0 then grp_id
        when 1 then grp_id
        when 2 then grp_id
        when 3 then grp_id
        when 4 then grp_id
      end grp_id,
      min(case state --second expression
            when 0 then some_date
            when 1 then some_date
            when 2 then some_date
            when 3 then some_date
            when 4 then some_date
          end) as mintime
    from t 
    where
      case state --1st expression: at least one index column has to be not null
        when 0 then grp_id
        when 1 then grp_id
        when 2 then grp_id
        when 3 then grp_id
        when 4 then grp_id
      end is not null
    group by
      case state --1st expression
        when 0 then grp_id
        when 1 then grp_id
        when 2 then grp_id
        when 3 then grp_id
        when 4 then grp_id
      end;
    -------------
    Doc's snippet:
    13.5.3.6 Full Scans
    A full scan is available if a predicate references one of the columns in the index. The predicate does not need to be an index driver. A full scan is also available when there is no predicate, if both the following conditions are met:
    All of the columns in the table referenced in the query are included in the index.
    At least one of the index columns is not null.
    A full scan can be used to eliminate a sort operation, because the data is ordered by the index key. It reads the blocks singly.
    13.5.3.7 Fast Full Index Scans
    Fast full index scans are an alternative to a full table scan when the index contains all the columns that are needed for the query, and at least one column in the index key has the NOT NULL constraint. A fast full scan accesses the data in the index itself, without accessing the table. It cannot be used to eliminate a sort operation, because the data is not ordered by the index key. It reads the entire index using multiblock reads, unlike a full index scan, and can be parallelized.
    You can specify fast full index scans with the initialization parameter OPTIMIZER_FEATURES_ENABLE or the INDEX_FFS hint. Fast full index scans cannot be performed against bitmap indexes.
    A fast full scan is faster than a normal full index scan in that it can use multiblock I/O and can be parallelized just like a table scan.
    So the question is: why does Oracle do a full table scan?
    Everything needed is in the index and one expression is not null, but an index (fast) full scan is not even considered by the CBO (I did a 10053 trace).
    | Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY     |      |      1 |     85 |     20 |00:00:00.11 |    3841 |
    |*  2 |   TABLE ACCESS FULL| T    |      1 |   6100 |   2000 |00:00:00.10 |    3841 |
    Predicate Information (identified by operation id):
       2 - filter(CASE "STATE" WHEN 0 THEN "GRP_ID" WHEN 1 THEN "GRP_ID" WHEN 2
               THEN "GRP_ID" WHEN 3 THEN "GRP_ID" WHEN 4 THEN "GRP_ID" END  IS NOT NULL)
    Let's try some minimalistic examples. First, with no FBI.
    create index idx_grp_id on t(grp_id);
    select grp_id,
           min(grp_id) min
    from t
    where grp_id is not null
    group by grp_id;
    | Id  | Operation             | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
    |   1 |  HASH GROUP BY        |            |      1 |     20 |     20 |00:00:01.00 |     244 |    237 |
    |*  2 |   INDEX FAST FULL SCAN| IDX_GRP_ID |      1 |    122K|    122K|00:00:00.54 |     244 |    237 |
    Predicate Information (identified by operation id):
       2 - filter("GRP_ID" IS NOT NULL)
    This is the kind of output I expected to see with the FBI. An index FFS was used although grp_id has no NOT NULL constraint.
    Let's try a simple FBI.
    create index fidx_grp_id on t(trunc(grp_id));
    select trunc(grp_id),
           min(trunc(grp_id)) min
    from t
    where trunc(grp_id) is not null
    group by trunc(grp_id);
    | Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY     |      |      1 |     20 |     20 |00:00:00.94 |    3841 |
    |*  2 |   TABLE ACCESS FULL| T    |      1 |   6100 |    122K|00:00:00.49 |    3841 |
    Predicate Information (identified by operation id):
       2 - filter(TRUNC("GRP_ID") IS NOT NULL)
    Again, an index (fast) full scan is not even considered by the CBO.
    I tried:
    alter table t modify grp_id not null;
    alter table t add constraint trunc_not_null check (trunc(grp_id) is not null);
    I even tried to set the table's hidden column (SYS_NC00008$) to NOT NULL.
    It has no effect; the FTS is still used.
    Let's try another query:
    select distinct trunc(grp_id)
    from t
    where trunc(grp_id) is not null
    | Id  | Operation             | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH UNIQUE          |             |      1 |     20 |     20 |00:00:00.85 |     244 |
    |*  2 |   INDEX FAST FULL SCAN| FIDX_GRP_ID |      1 |    122K|    122K|00:00:00.49 |     244 |
    Predicate Information (identified by operation id):
       2 - filter("T"."SYS_NC00008$" IS NOT NULL)
    Here the index FFS is used.
    Let's try one more query, very similar to the above query:
    select trunc(grp_id)
    from t
    where trunc(grp_id) is not null
    group by trunc(grp_id)
    | Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY     |      |      1 |     20 |     20 |00:00:00.86 |    3841 |
    |*  2 |   TABLE ACCESS FULL| T    |      1 |    122K|    122K|00:00:00.49 |    3841 |
    Predicate Information (identified by operation id):
       2 - filter(TRUNC("GRP_ID") IS NOT NULL)
    And again, no index full scan.
    So my next question is:
    What are the restrictions that prevent an index (fast) full scan from being used in these scenarios?
    Thank you very much for your answers.

    I'll start off with the caveat that I'm no Jonathan Lewis, so hopefully someone will be able to come along and give you a more coherent explanation than I'm going to attempt here.
    It looks like the application of the MIN function against the case expression is confusing the optimizer and disallowing the use of your FBI. I tested this against my 11.2.0.1 instance, and there your query chooses the fast full scan without being nudged in the right direction.
    That being said, I was able to get this to use a fast full scan on my 10g instance, but I had to jiggle the wires a bit. I modified your original query slightly, just to make it easier to do my fiddling.
    original (in the sense that it still takes the full table scan) query
    with data as (
      select
        case state -- 1st expression
          when 0 then grp_id
          when 1 then grp_id
          when 2 then grp_id
          when 3 then grp_id
          when 4 then grp_id
        end as grp_id,
        case state --second expression
              when 0 then some_date
              when 1 then some_date
              when 2 then some_date
              when 3 then some_date
              when 4 then some_date
        end as mintime
      from t
      where
        case state --1st expression: at least one index column has to be not null
          when 0 then grp_id
          when 1 then grp_id
          when 2 then grp_id
          when 3 then grp_id
          when 4 then grp_id
        end is not null
      and
        case state --second expression
              when 0 then some_date
              when 1 then some_date
              when 2 then some_date
              when 3 then some_date
              when 4 then some_date
        end is not null
    )
    select--+ GATHER_PLAN_STATISTICS
      grp_id,
      min(mintime)
    from data
    group by grp_id;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'allstats  +peeked_binds'));
    | Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY     |      |      2 |     33 |     40 |00:00:00.07 |    7646 |
    |*  2 |   TABLE ACCESS FULL| T    |      2 |     33 |   4000 |00:00:00.08 |    7646 |
    Predicate Information (identified by operation id):
       2 - filter((CASE "STATE" WHEN 0 THEN "GRP_ID" WHEN 1 THEN "GRP_ID" WHEN 2
               THEN "GRP_ID" WHEN 3 THEN "GRP_ID" WHEN 4 THEN "GRP_ID" END  IS NOT NULL AND
               CASE "STATE" WHEN 0 THEN "SOME_DATE" WHEN 1 THEN "SOME_DATE" WHEN 2 THEN
               "SOME_DATE" WHEN 3 THEN "SOME_DATE" WHEN 4 THEN "SOME_DATE" END  IS NOT
               NULL))
    modified version where we prevent the MIN function from being applied too early, by using ROWNUM
    with data as (
      select
        case state -- 1st expression
          when 0 then grp_id
          when 1 then grp_id
          when 2 then grp_id
          when 3 then grp_id
          when 4 then grp_id
        end as grp_id,
        case state --second expression
              when 0 then some_date
              when 1 then some_date
              when 2 then some_date
              when 3 then some_date
              when 4 then some_date
        end as mintime
      from t
      where
        case state --1st expression: at least one index column has to be not null
          when 0 then grp_id
          when 1 then grp_id
          when 2 then grp_id
          when 3 then grp_id
          when 4 then grp_id
        end is not null
      and
        case state --second expression
              when 0 then some_date
              when 1 then some_date
              when 2 then some_date
              when 3 then some_date
              when 4 then some_date
        end is not null 
      and rownum > 0
    )
    select--+ GATHER_PLAN_STATISTICS
      grp_id,
      min(mintime)
    from data
    group by grp_id;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'allstats  +peeked_binds'));
    | Id  | Operation                | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  HASH GROUP BY           |             |      2 |     20 |     40 |00:00:00.01 |      18 |
    |   2 |   VIEW                   |             |      2 |     33 |   4000 |00:00:00.07 |      18 |
    |   3 |    COUNT                 |             |      2 |        |   4000 |00:00:00.05 |      18 |
    |*  4 |     FILTER               |             |      2 |        |   4000 |00:00:00.03 |      18 |
    |*  5 |      INDEX FAST FULL SCAN| FIDX_ACTIVE |      2 |     33 |   4000 |00:00:00.01 |      18 |
    Predicate Information (identified by operation id):
       4 - filter(ROWNUM>0)
       5 - filter(("T"."SYS_NC00006$" IS NOT NULL AND "T"."SYS_NC00007$" IS NOT NULL))

  • Database query using report parameters in Report Viewer 2008 and CR-XI

    Hi - I have written a report for in-house use on an Access database. The report uses parameters to filter the record selection for reporting. The report operates fine in Crystal XI, but in Crystal Report Viewer 2008, the parameters are not shown and appear to be inaccessible. Is there a way to query the database in CRV2008, as opposed to simply showing the data that was saved from CR-XI in the report? Would I have to use the Crystal Server to do this? Is there a no-cost solution that I could use instead of CRV2008 for such functionality?
    Thanks - Barney

    Hi Barney,
    As noted, the CRV is for reports with saved data only; no prompting allowed. I'll move this post to the OnDemand forums; possibly they have suggestions as a workaround.
    You should call our Sales team and describe your situation; possibly they have a low-cost solution for you.
    If you have a .NET or Java developer available you could always write and host your own web app; that would be simple to use and should not take too much to do.
    We have samples that you could use so most of the work is done for you.
    Thank you
    Don
    CORRECTION: Report Viewer is handled by the Report Design forum, so moving the post back.

  • Displaying results from a database query using servlets

    I have this HTML form where users can search a MS Access database by entering a combination of EmployeeID, First name or last name. This form invokes a Java Servlet which searches the database and displays the results as an HTML page. I am giving the user the choice of displaying 3, 5 or 10 results per page. I want to know how to do that using servlets. If a particular search results in 20 results, the results should be displayed in sets of 3, 5 or 10 depending on the user's choice. If the user makes a choice of 5 results per page then 4 pages should be displayed with a "next" and "previous" button on each page. I want to know how this can be done.

    Arun,
    I'm not sure how to do this using JSP, as I have not worked with JSP.
    But I can give you a hint on how to do this within a normal Java class, as I've used this in my current project; see the sketch after this reply.
    In your core class/bean that generates the entire resultset, you need to run a loop that will scan through the desired number of records in the resultset.
    To do this, you have to have skip and max parameters in your URL.
    Something like http://server.com/myservlet?skip=0&max=10 to display the first 10 records, and http://server.com/myservlet?skip=10&max=10 to display the next 10 records. The records-per-page parameter would be fed in by the user via a simple form in your web page. If you need to hold this records-per-page value, you can store it in a cookie (since this is not a crucial piece of info, you don't need a session object etc.; a cookie will do) and read the value each time you display the resultset.
    So, essentially, suppose you are at the first page and you wish to show 10 recs at a time. The link for the "Next" button would be http://server.com/myservlet?skip=10&max=10
    when at the second page, you'll have
    the "Prev" button as http://server.com/myservlet?skip=0&max=10 and
    the "Next" button as http://server.com/myservlet?skip=20&max=10 and so on...
    hope this makes sense..
    Shantanu
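    Here is a minimal sketch of that skip/max idea as a servlet; the servlet name, JNDI datasource name, and table/column names are assumptions for illustration, and error handling is trimmed:
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.naming.InitialContext;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.sql.DataSource;

    public class EmployeeSearchServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            int skip = parseIntOrDefault(req.getParameter("skip"), 0);   // rows belonging to earlier pages
            int max  = parseIntOrDefault(req.getParameter("max"), 10);   // rows per page
            resp.setContentType("text/html");
            PrintWriter out = resp.getWriter();
            try {
                DataSource ds = (DataSource) new InitialContext()
                        .lookup("java:comp/env/jdbc/employees");          // hypothetical JNDI name
                Connection con = ds.getConnection();
                PreparedStatement ps = con.prepareStatement(
                        "select emp_id, first_name, last_name from employees order by emp_id");
                ResultSet rs = ps.executeQuery();
                int row = 0, shown = 0;
                while (rs.next() && shown < max) {
                    row++;
                    if (row <= skip) continue;                            // skip earlier pages
                    out.println(rs.getString("emp_id") + " "
                            + rs.getString("first_name") + " "
                            + rs.getString("last_name") + "<br>");
                    shown++;
                }
                // the "Next" link simply advances skip by one page
                out.println("<a href=\"myservlet?skip=" + (skip + max) + "&max=" + max + "\">Next</a>");
                rs.close(); ps.close(); con.close();
            } catch (Exception e) {
                throw new ServletException(e);
            }
        }

        private static int parseIntOrDefault(String s, int def) {
            try { return Integer.parseInt(s); } catch (Exception e) { return def; }
        }
    }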

  • Syntax for a database query in a simple Java app?

    Connection con;
    private boolean conFree = true;
    private String dbName = "java:comp/env/jdbc/gene";
    public geneDB () throws Exception {
        try {
            Context ic = new InitialContext();
            DataSource ds = (DataSource) ic.lookup(dbName);
            con = ds.getConnection();
        } catch (Exception ex) {
            throw new Exception("Couldn't open connection to database: " + ex.getMessage());
        }
    }
    String insertStatement = "insert into gene (cds,status) values(" + cds + "," + status + ")";
    PreparedStatement prepStmt = con.prepareStatement(insertStatement);
    prepStmt.setString(1, cds);
    prepStmt.setString(2, status);
    prepStmt.executeUpdate();
    prepStmt.close();
    Any comments??? This is what I have done, please comment...

    Actually, I am having some kind of trouble, and I would like to confirm the code design suggestion regarding the database query using PointBase through a Java web service.
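    One thing worth flagging in the snippet above: the values are concatenated into the SQL string, yet setString() is then called for placeholders that do not exist. A hedged sketch of the usual bind-parameter form, reusing the cds, status and con names from the post (assumed to be Strings and an open Connection):
    String insertStatement = "insert into gene (cds, status) values (?, ?)";
    PreparedStatement prepStmt = con.prepareStatement(insertStatement);
    prepStmt.setString(1, cds);    // values are bound, not concatenated into the SQL
    prepStmt.setString(2, status);
    prepStmt.executeUpdate();
    prepStmt.close();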

  • Use PreparedStatement to delete several rows from the database

    Hello everyone,
    I am trying to delete multiple rows from the database at one time using a PreparedStatement.
    It works well when I try it in SQL directly; the SQL query is as follows:
    delete from planners_offices where planner ='somename' and office in ( 'officeone', 'officetwo', 'officethree')
    I want to delete those 3 rows at one time.
    But when I use a PreparedStatement to implement this, it does not work. It does not throw any exception, but it just does not work for me; the updated rows value always returns "0".
    Here is my simplified code:
    PreparedStatement ps = null;
    sqlStr = " delete from PLANNERS_OFFICES where planner = ? and office in (?) ";
    Connection con = this.getConnection(dbname);
    try {
        //set the sql statement into the preparedstatement
        ps = con.prepareStatement(sqlStr);
        ps.setString(1, "somename");
        ps.setString(2, "'officeone','officetwo','officethree'");
        int rowsUpdated = ps.executeUpdate();
        System.out.println(rowsUpdated);
    //catch exception
    } catch (SQLException e) {
        System.out.println("SQL Error: " + sqlStr);
        e.printStackTrace();
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        this.releaseConnection(dbname, con);
        try {
            ps.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
    rowsUpdated always gives me "0".
    I tried deleting only one record at a time, "ps.setString(2, "officeone");", and it works fine.
    I am guessing the second value I want to bind to the PreparedStatement is not right. I tried several formats of that string, but none of them work either.
    Can anyone give me a clue?
    Thanks in advance !
    Rachel

    The setString function in a PreparedStatement doesn't just do a replace with the question mark. It is doing some internal mumbojumbo (technical term) to assign your variable to the ?.
    If you want to run your statement as written, then you will need to put in the correct # of question marks for the in clause:
    delete from PLANNERS_OFFICES WHERE PLANNER = ? and office in (?,?,?)
    If you need to allow for one or more parameters in your in clause, then you will need to build your SQL dynamically before creating your prepared statement.
    ArrayList<String> listOfOfficesToDelete;   // populated with the office names to delete
    StringBuffer buffer = new StringBuffer("DELETE FROM PLANNERS_OFFICES WHERE PLANNER = ? AND OFFICE IN (");
    for (int i = 0; i < listOfOfficesToDelete.size(); i++) {
        if (i != 0)
            buffer.append(",");
        buffer.append("?");
    }
    buffer.append(")");
    PreparedStatement cursor = conn.prepareStatement(buffer.toString());
    cursor.setString(1, plannerObj);
    for (int i = 0; i < listOfOfficesToDelete.size(); i++) {
        // offsets shift by one because parameter 1 is already taken by the planner
        cursor.setString(i + 2, listOfOfficesToDelete.get(i));
    }
    cursor.executeUpdate();

  • SAP / ABAP Query - using logical database

    Hi ,
    We have a mandate to implement SAP Query using only Logical Databases (LDBs).
    We understand that there are several issues with this approach:
    1) Parallel tables in MM need to be displayed on separate lines.
    2) Statistics based on fields from 2 different tables cannot be produced, e.g. EKPO (PO Number) and EKKN (Cost Center).
    Please share your experiences.
    Thank you in advance.
    Kishore

    Adeel,
    I do appreciate your experience and respect your knowledge of SAP Query.
    Joining tables seems simple to IT experts, but not so for end users. SAP Query is an end-user tool, and users seem to need a simple, user-friendly, drag-and-drop solution to extract information using SAP Query based on a pre-defined infoset. Various SAP conferences have been advocating the use of SAP-delivered Logical Databases to create infosets in order to harness the various advantages of LDBs.
    In MM there are several LDBs (e.g. ELM, EBM, etc.) to create queries on EKPO and EKKN. The problem arises when you use the same LDB to extract information from more than the above two tables, e.g. EKPO, EKET, EKKN and EKBE. SAP expects you to display fields on multiple lines and also does not allow producing statistics based on fields from two parallel tables, say EKKN and EKBE. Moreover, multi-line reports cannot be produced in the ALV/SLV format.
    We are also looking for best-practice solutions for providing SAP Query to MM or FI users based on SAP-delivered Logical Databases.
    Pascal

  • SQLite encrypted database does not get attached using Adobe AIR, why?

    Hi,
    Does anyone know the solution? I am trying to attach an encrypted SQLite database in Adobe AIR (Adobe Flex Builder), but it does not get attached: SQLConnection.attach() throws an error even though the given key is correct, while the same database opens fine with SQLConnection.open() and the same key. Does anyone know how to attach the encrypted database? I am using two databases: one is opened and the other must be attached to the existing connection. Thanks in advance. This is Adobe AIR / Flex related; I use the following code:
                   databaseFile1 = File.applicationStorageDirectory.resolvePath("Sample_1.sqlite");
                   databaseFile2 = File.applicationStorageDirectory.resolvePath("Sample_2.sqlite");
    dbConnection.open(databaseFile1, SQLMode.CREATE, false, 1024, secKey);
    dbConnection.attach("db2",databaseFile2,null,secKey);
    got the following error.
    ERROR #3125 Unable to open the database file.

    And I would say even more: "this is the issue"!
    It should be possible, as it is clearly stated in the doc:
    public function attach(name:String, reference:Object = null, responder:Responder = null, encryptionKey:ByteArray = null):void
    with 
    encryptionKey:ByteArray (default = null) — The encryption key for the database file. If the attach() call creates a database, the database is encrypted and the specified key is used as the encryption key for the database. If the call attaches an existing encrypted database, the value must match the database's encryption key or an error occurs. If the database being attached is not encrypted, or to create an unencrypted database, the value must be null (the default).
    so with the same encryptionKey, I (and this should be the same for FinalTarget) can open the encrypted db but not attach it... quite strange.

  • How to use a user-defined function in a select query using TopLink

    Hi Friends,
    I am a little bit new to TopLink stuff... so please help me.
    I have two database functions: 1. encrypt and 2. decrypt.
    I want to execute the following SQL query using TopLink:
    select port, database, user_name, decrypt(encrypt('String which is to be encrypt ','password'),'password') from CONFIGURATION
    Can anyone tell me how to write code in TopLink which will give the above SQL output?
    thanks .....

    The "Specifying a Custom SQL String in a DatabaseQuery" section in the TopLink Developer's Guide may help... http://download-uk.oracle.com/docs/cd/B32110_01/web.1013/b28218/qrybas.htm#BCFHDHBG
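    As a hedged illustration of what that section describes (untested here), a raw SQL string can be wrapped in a DataReadQuery and executed on a TopLink Session; the class name and the way the Session is obtained are assumptions:
    import java.util.List;
    import oracle.toplink.queryframework.DataReadQuery;
    import oracle.toplink.sessions.Session;

    public class ConfigurationDao {
        // returns one row per CONFIGURATION record, keyed by column name
        public List readDecryptedConfiguration(Session session) {
            DataReadQuery query = new DataReadQuery();
            query.setSQLString(
                "select port, database, user_name, "
              + "decrypt(encrypt('String which is to be encrypt ','password'),'password') "
              + "from CONFIGURATION");
            return (List) session.executeQuery(query);
        }
    }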

  • Database query in TopLink

    hi all,
    I need to fetch a single row from a table and print it on a JSF page in outputText and inputText boxes....
    What should the procedure be???
    How do I write the query?
    And the resultset that is retrieved is in the form of a Vector; how do I put its values into outputText boxes?

    Hello,
    Your questions are pretty broad, but let me address the TopLink aspects of your questions. Once you have an object retrieved via TopLink, then you can consider your JSF presentation of the data that is housed within the object.
    Let's say you have a Java class named X, and you want instances of X persisted to a table that is also named X. Now you are interested in using TopLink to hydrate an instance of X so that you can render some information for that instance, which contains, of course, data from table X.
    In general, you TopLink-map X.class to the table named X, with all of the columns in the table mapped to instance attributes defined on X.java or one of its superclasses.
    Now the question is how to locate the row of interest from table X. Since you are interested in only one row, it is most likely that you are interested in identifying that row of interest by its primary key, which is likely some numeric surrogate key.
    Let's assume you are interested in retrieval by primary key. Let's also assume that the X_ID column on the X table is its primary key. Let's also assume the id instance attribute on X.java (or from one of its superclasses) is TopLink-mapped and is simply a direct-to-field mapping, so the id instance attribute on X.java would take on the value housed in X.X_ID. Let's further assume you want to find the instance of X where X_ID = 123.
    The resultset that is retrieved via TopLink does not have to be a Vector (List) of objects. You can express to TopLink that you are only interested in retrieving one object by use of oracle.toplink.queryframework.ReadObjectQuery or oracle.toplink.sessions.Session.readObject().
    If you choose the Session.readObject() approach, here is how you obtain the reference to the instance of X that represents the row where X.X_ID = 123:
    Session session = SessionManager.getManager().getSession(sessionName); // obtain a reference to the Session
    ExpressionBuilder builder = new ExpressionBuilder();
    Expression selectionCriteria = builder.get("id").equal(123);
    X foundInstance = (X) session.readObject(X.class,selectionCriteria);
    If you choose the ReadObjectQuery approach, here is how you obtain the reference:
    ReadObjectQuery query = new ReadObjectQuery();
    query.setReferenceClass(X.class);
    query.setSelectionCriteria(selectionCriteria); // defined above
    X foundInstance = (X) session.executeQuery(query);
    Once you have a handle to the X instance of interest, you can then go about rendering information about it to an end user via a JSP or JSF JSP page. Typically the queries that I show above are written as part of the implementation of an operation on a Session EJB, but it can be done without a Session EJB as well.
    I suggest you take a look at the TopLink Developer's Guide : http://download-west.oracle.com/docs/cd/B31017_01/web.1013/b28218/toc.htm as well as the TopLink API Reference from Javadoc : http://download-west.oracle.com/docs/cd/B31017_01/web.1013/b28219/index.html. Both of these documents are for TopLink 10.1.3.1, but you probably want to find the documentation for the version of TopLink you are using just in case.
    Hope that helps,
    Doug

  • Using PreparedStatement to execute a SQL Query

    hi All,
    I am trying to use a PreparedStatement to execute a query in Java.
    The problem is that the WHERE clause of that query is dynamically formed based on user input.
    So, in this case, will it help if I use a PreparedStatement in place of a normal Statement?
    If yes, then how do I handle the dynamic WHERE clause of the query?
    Thanks in Advance.
    Regards,
    ninad

    Let's say the user is providing a name, and you're querying based on that name: PreparedStatement ps = con.prepareStatement("select * from my_table where name = ?");
    ps.setString(1, nameFromUser);
    ResultSet rs = ps.executeQuery();
    http://java.sun.com/developer/onlineTraining/Database/JDBC20Intro/
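    To address the dynamic WHERE clause part of the question, here is a hedged sketch of one common pattern (the table and column names are invented for illustration): append one placeholder per filter the user actually supplied, then bind the collected values in order.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public class EmployeeSearch {
        public ResultSet search(Connection con, String id, String firstName, String lastName)
                throws SQLException {
            StringBuilder sql = new StringBuilder("select * from employee where 1=1");
            List<String> params = new ArrayList<String>();
            if (id != null)        { sql.append(" and emp_id = ?");     params.add(id); }
            if (firstName != null) { sql.append(" and first_name = ?"); params.add(firstName); }
            if (lastName != null)  { sql.append(" and last_name = ?");  params.add(lastName); }

            PreparedStatement ps = con.prepareStatement(sql.toString());
            for (int i = 0; i < params.size(); i++) {
                ps.setString(i + 1, params.get(i));   // JDBC parameters are 1-based
            }
            return ps.executeQuery();
        }
    }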

  • How to do a linguistic sort query using the TopLink API

    Hi,
    I want to write a TopLink query using a linguistic sort, like the following SQL:
    SELECT * from emp_name
    where ename LIKE 'müller%'
    ORDER BY NLSSORT(ename, 'NLS_SORT=german')
    I got two questions:
    (1) how to implement "ORDER BY NLSSORT(ename, 'NLS_SORT=german')" using the TopLink API.
    (2) I built linguistic index,
    CREATE INDEX nls_index ON emp_name (NLSSORT(ename, 'NLS_SORT = german'));
    But the SQL plan shows that the LIKE clause does not use the index. I wonder how to force a query with a LIKE clause to use such a linguistic index.
    Thanks for your help in advance.
    -Evan

    Looks like we cannot do an exact word search using the Java API.
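    For what it's worth, here is a hedged, untested sketch of question (1) using the generic Expression.getFunction() hook; the mapped class EmpName and attribute name ename are assumptions, and the generated SQL should be checked to confirm the NLS_SORT argument is emitted as a literal:
    import java.util.List;
    import java.util.Vector;
    import oracle.toplink.expressions.ExpressionBuilder;
    import oracle.toplink.queryframework.ReadAllQuery;
    import oracle.toplink.sessions.Session;

    public class GermanSortedNames {
        public List readMuellerNames(Session session) {
            ExpressionBuilder emp = new ExpressionBuilder();
            ReadAllQuery query = new ReadAllQuery(EmpName.class);    // EmpName maps EMP_NAME (assumed)
            query.setSelectionCriteria(emp.get("ename").like("müller%"));

            Vector args = new Vector();
            args.add("NLS_SORT=german");
            // order by NLSSORT(ename, 'NLS_SORT=german'), expressed as a custom function call
            query.addOrdering(emp.get("ename").getFunction("NLSSORT", args));
            return (List) session.executeQuery(query);
        }
    }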

  • ABAP query using logical database KDF is not populating custom fields

    Hi Experts ,
    I created two following queries
    1.       VENDORCATKDF – uses KDF logical database
    2.       VENDORCATLFA1 – uses table = LFA1
    I’m pulling the same information in both queries:
    ·         Vendor Number
    ·         Country
    ·         Vendor Name
    ·         Vendor Category  (custom fields added to LFA1)
    The results for the query that uses the logical database KDF are incorrect. It doesn't pull in the flag on the custom field LFA1-ZMRO, even though the logical database KDF is made up of the table LFA1 and has these fields.
    Is there something that can be done so that all of these "custom" category fields under LFA1 (such as LFA1-ZZMRO) get pulled into queries when we use the logical database KDF?

    Hi,
    I got the error removed by ensuring that fields from one table are part of one line only (with the help of the ruler). But the underlying problem remains: the output is not ALV but list output.
    I do not think having additional fields in the query is the reason for this.
    Is it because I am adjusting the output length of columns to ensure no hierarchical error?
    Can we not have a query using an LDB which is shown as an SAP list?
    Regards,
    Garima.

  • Farm RemoteApp 2012 R2: Your system administrator does not allow the use of default credentials to log on to Work Resources

    Hi
    Here is the situation:
    I have a farm with 3 servers (W2012R2) in a domain:
    Server1: RD Session Host, Connection Broker, RD Web Access
    Server2: RD Session Host, Connection Broker (Passive)
    Server3: RD Session Host
    2 DNS aliases:
    - poc.mydomain.local (used for RD Web Access, points to Server1)
    - poccb.mydomain.local (used for the Connection Broker, points to Server1)
    I have set up the Connection Broker in HA with Server2 as the passive server: DNS round robin poccb.mydomain.local (Server1)
    The certificate manager has generated 2 CA certificates:
    - 1 for RD Web Access (poc.mydomain.local)
    - 1 for Connection Broker SSO and for publishing
    I have created 1 Group Policy for these 3 servers and 1 GP for my client Windows 7 SP1.
    Server GPO :
    Computer/Administrative Templates/Windows Components/Remote Desktop Services/Remote Desktop Session Host/Security
    Always prompt for password upon connection=Disabled
    Require use of specific security layer for remote (RDP) connections : SSL (TLS 1.0)
    Set client connection encryption level : High Level
    Client GPO
    Computer/Administrative Templates/System/Credentials Delegation = Allow delegating default credentials (Concatenate OS defaults with input above)
    TERMSRV/POCCB.mydomain.local
    I use no Gateway, and in my collection I have activated SSL (as in my server GPO).
    I now have a problem with SSO.
    Connection with the Remote Desktop client with server name = poccb.mydomain.local:
    Your system administrator does not allow you the use of default credentials to log on to the remote computer poccb.mydomain.local because its identity is not fully verified
    If in my client GPO I add the physical names of the 3 servers, it works:
    TERMSRV/Server1
    TERMSRV/Server2
    TERMSRV/Server3
    Open RDP files with server name = poccb.mydomain.local:
    If my connection broker connects me to Server1, no problem.
    But if I arrive on Server2 or Server3:
    Your system administrator does not allow the use of default credentials to log on to Work Resources
    I have searched on the internet. No results for "to log on to Work Resources".
    Any idea? Thanks for your help

    Hi,
    Thank you for posting in Windows Server Forum.
    Firstly, check that your user is entering the credentials in the dialog box as domain\username.
    As a test, you can edit the .rdp file with Notepad and place the line "enablecredsspsupport:i:0" in it, save it, and launch it to check whether you are facing the same issue.
    As you are using Windows 7, upgrade to RDP 8.1. You have already entered the FQDN of the server under "Allow delegating default credentials"; as a further test, please enable and configure all the remaining settings as follows and check the result.
    Start / Run / gpedit.msc / Computer Configuration / Administrative Templates / System / Credentials Delegation, and make sure you have the following four options enabled and configured:
    Allow Delegating Default Credentials with NTLM-only Server Authentication
    Allow Delegating Default Credentials
    Allow Delegating Saved Credentials
    Allow Delegating Saved Credentials with NTLM-only Server Authentication
    Finally, open a command prompt and use the "gpupdate /force" command to apply the policy directly.
    More information:
    Remote desktop credentials did not work
    Hope it helps!
    Thanks.
    Dharmesh Solanki
