Breaking up a long query

Hello,
I am creating a PL/SQL package that produces a report based on the selection of different accounts and dates from a text area and a drop-down menu. When I select accounts from the text area, it should run a query to generate the report, for example:
if tableName = 'human' then
sql_query:= 'SELECT distinct  date || gen || ID  || SSN || DOB || ID1 || ID2 ||
children || address || phone || cell ||
zip || mortgage || loans ||
military|| race || class || sex ||
health|| work|| occupation || dependents ||
tax_info || travel_dates || spouse || insurance ||
children || parents || cars || cash ||
health_benefits || tuition || debt || family ||
income || qol || gas_mileage ||
age || service || CC ||
bills || house || apt ||
AC || DC || temperature ||
room || BC || AD || EF ||
FG || GH || IJ||
KL || LM ||
NO || YES ||
SS || MM ||
NM || LL'||
                     'FROM INFORMATION
                      where VERSION=('''||vs1||''') ' || 'and
                     DATE between '''||startDate||''' and '''||endDate||''' and
                     ID in ('''||accnt||''')'; 
EXECUTE IMMEDIATE sql_query BULK COLLECT INTO query_result;
htp.p(sql_query);
for q in 1..query_result.count loop
htp.p('<OPTION VALUE="'||query_result(q)||'">'||query_result(q));
end loop;
end;

I have a few problems.
First, the query does not get printed on the screen; the error says that the identifier is too long.
Second, query_result does not get printed, since the first problem is not solved.
What can I do to split my query up so that it is not so long?

Are you really concatenating that many columns? What does a simple SELECT of the LENGTH of all that return? Which line number does the error report for the identifier? Why not back off the htp call and try DBMS_OUTPUT in a SQL*Plus session and see if at least that works?
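
One way to keep the statement manageable is to build it up in pieces and print the finished text before executing it, so you can see exactly what is handed to EXECUTE IMMEDIATE. A minimal sketch, reusing the vs1, startDate, endDate, accnt and query_result variables from the original code (the column list is abbreviated):

declare
   sql_query varchar2(32767);
begin
   -- build the statement in small, readable pieces instead of one giant literal
   sql_query := 'SELECT DISTINCT date || gen || ID || SSN || DOB ';   -- ...and so on for the remaining columns
   sql_query := sql_query || 'FROM INFORMATION ';
   sql_query := sql_query || 'WHERE VERSION = (''' || vs1 || ''') ';
   sql_query := sql_query || 'AND DATE BETWEEN ''' || startDate || ''' AND ''' || endDate || ''' ';
   sql_query := sql_query || 'AND ID IN (''' || accnt || ''')';

   -- print the finished statement before running it (SQL*Plus: SET SERVEROUTPUT ON)
   dbms_output.put_line(sql_query);

   execute immediate sql_query bulk collect into query_result;
end;
/

Note that older database releases limit a DBMS_OUTPUT line to 255 characters, so a very long statement may need to be printed in slices.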

Similar Messages

  • Long Query Runtime/Web-template Loading time

    Hi,
    We are having a very critical performance issue, a long query runtime, which is certainly not acceptable to the client either.
    Background Information
    We are using the Web Application Designer (WAD) 2004s release to design the front end of our reports built in a BI 7.0 system.
    Problem Area
    Loading of the web template in the browser.
    Problem Analysis
    The query takes a long time to run whenever we load it through the portal, or even directly through Web Application Designer. The current runtime is more than a minute, and I have noticed that 95% of it is spent loading the variable screen. FYI, if I run the query through Query Designer or BEx Analyzer, it takes 3-5 seconds to execute.
    We have collected all the statistics, and everything shows that the query itself takes hardly any time to execute; it is the loading time that creates the bottleneck.
    Possible Cause
    The web template holds 11 data providers, 5 of which are based on queries and the rest on query views. These data providers load into memory in parallel, which could cause the delay.
    These data providers expose detailed variable screens. Of the 21 input fields exposed by the web template, 8 are based on hierarchy node variables and 1 on a hierarchy variable. To my knowledge, each time a hierarchy/hierarchy node variable is called it loads the complete hierarchy into memory (in other words, it is not performance-efficient to use hierarchies).
    I request you to treat this as a matter of high priority and provide suggestions to remove the bottlenecks and make the application perform efficiently. Please let me know if you need any further information.
    Thanks.
    Shabbar

    I would recommend you check how long the query execution actually takes without running it from the web template. If the individual query itself takes a long time, then you need to do some performance improvement on the back-end side (aggregates, indexing, and so on).
    If the performance issue is only with the web templates, then you need to look for some Notes on it; I remember we had to apply some Notes relating to the browser taking too long to load the selection screen in web reports.
    After exhausting all those options, I would implement pre-calculating the query result beforehand using the broadcaster.
    thanks.
    Wond

  • Insert track (song) breaks in a long compilation

    Hello everyone.
    For about a month now, I have been messing with GB and even tried Audacity to find a solution to my problem:
    I have many single, long-song DJ compilations in iTunes that consist of different songs smoothly transitioning into one another to make an hour-long single track. When I play them back on the computer or iPhone there is no problem; I can quickly move from one song to another. However, when I burn one onto a CD and play it in the car, I have to hold the forward button forever to get to the parts I want to hear.
    How do I insert breaks into the long song, so that when I burn it onto a CD I can just hit next track and have 15 (or so) single songs to work with?
    I would be eternally grateful for any help.

    HangTime, I think I found what you were saying. I went to Show Podcast Track, then added the appropriate markers.
    Now do I just export this song (12 songs) to iTunes by means of "Share Podcast to iTunes"? I tried that, but got a message that it cannot be exported as an enhanced version.

  • Ora-01704 string literal too long error  on long query syntax

    I have a query with more than 4000 characters. I can't seem to get ociparse to accept it. The bind variables are not an issue, as I am not concatenating any strings into the query syntax. It is just that my query, with all the columns and unions etc., exceeds 4000 characters. Is there any way around this, short of hiding it in a view (which I have already done for other long queries)?
    System:
    PHP 4.3.10
    OCI driver
    Oracle 9i Release 2
    Thanks,
    Bryan

    Misread your post, sorry. Oracle limits literal strings to 4,000 characters. According to the documentation, you should use bind variables where possible to keep literal strings below 4,000 characters. You could also try a PL/SQL block.
    The error you're getting is being returned by Oracle, not PHP. I've seen it pop up on bugtraq a couple of times for PHP, but the answer is always the same. I'm more of a programmer than a database expert, so forgive me for not having a better answer. You may want to try posting this to one of the more specific Oracle forums, where someone will probably have a better answer for you.
    http://www.stanford.edu/dept/itss/docs/oracle/9i/server.920/a96525/toc.htm
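
    To illustrate the bind-variable workaround in PL/SQL terms: literals written inline in the SQL text are limited to 4,000 characters (that is what raises ORA-01704), but values passed as binds are not. A minimal sketch, assuming a hypothetical demo_docs table with a CLOB column doc_text:

    DECLARE
      l_big VARCHAR2(32767);
    BEGIN
      l_big := RPAD('x', 5000, 'x');   -- longer than the 4,000-character literal limit
      -- passing the value as a bind variable avoids ORA-01704,
      -- which only applies to literals embedded in the statement text
      EXECUTE IMMEDIATE
        'INSERT INTO demo_docs (doc_text) VALUES (:txt)'
        USING l_big;
      COMMIT;
    END;
    /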

  • Is there any way to use Control Break in a SQL Query

    Hi,
    Is there any way to use a control break on the Dept column in a SQL query to get Output-2 instead of Output-1?
    Is there any way to modify the SQL query?
    SQL
    select dept, loc, count(*)
      from dept
     group by dept, loc

    Output-1
      Dept      Loc       Count(*)
      10         AA        1
      10         BB        2
      10         CC        2
      20         AA        2
      20         BB        2

    Output-2
      Dept      Loc       Count(*)
      10         AA        1
                 BB        2
                 CC        2
      20         AA        2
                 BB        2

    Thanks,
    Deepak

    DeepakJ wrote:
    Is there any way to use a control break on the Dept column in a SQL query to get Output-2 instead of Output-1?
    Yes, using the lag analytic function and a specified ordering of the data:
    select
        nullif(d.deptno, lag(d.deptno) over (order by d.deptno, d.loc, e.mgr nulls first)) deptno
      , nullif(d.loc, lag(d.loc) over (order by d.deptno, d.loc, e.mgr nulls first)) loc
      , e.mgr
      , count(*) n
    from
        dept d
          join emp e
            on d.deptno = e.deptno
    group by
        d.deptno
      , d.loc
      , e.mgr
    order by
        d.deptno
      , d.loc
      , e.mgr nulls first;
    DEPTNO  LOC       MGR   N
        10  NEW YORK         1
                      7782   1
                      7839   1
        20  DALLAS    7566   2
                      7788   1
                      7839   1
        30  CHICAGO   7698   4
                      7839   1
        40  BOSTON    7698   2
                      7902   1
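
    For the exact layout Deepak asked for (the repeated Dept value blanked out), the same nullif/lag idea also works directly on his dept/loc aggregate, without the emp join. A minimal sketch, assuming the dept table from the original post:

    select
        nullif(dept, lag(dept) over (order by dept, loc)) dept
      , loc
      , count(*) n
    from dept
    group by dept, loc
    order by dept, loc;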

  • Oracle Strange Long query

    Hello,
    I'm still in the process of finding out what is wrong with my Oracle connection.
    Every time a request is made, this query is executed:
    SELECT NULL AS table_cat, t.owner AS table_schem,t.table_name AS
    table_name,t.column_name AS column_name, DECODE (t.data_type, 'CHAR',
    1, 'CLOB', 2005, 'BLOB', 2004, 'VARCHAR2', 12, 'NUMBER', 3, 'LONG',
    -1, 'DATE', 93,'RAW', -3, 'LONG RAW', -4, 'BINARY_FLOAT', 7,
    'BINARY_DOUBLE', 8, 'XMLTYPE',2005, 'BFILE',2004,'NCHAR',1,
    'NVARCHAR2',12, 'NCLOB', 2005, 'ROWID', 12, 'FLOAT', 8, 1111) AS
    data_type, t.data_type AS type_name, decode(t.data_type, 'NUMBER',
    decode(t.data_precision, null, decode(t.data_scale, null, 0, 0, 38,
    t.data_scale), t.data_precision), 'FLOAT', 15, 'CLOB',2147483647,
    'NCLOB', 2147483647, 'LONG', 2147483647, 'BLOB', 2147483647, 'LONG
    RAW', 2147483647, 'BFILE', 2147483647, 'DATE', 19, 'ROWID', 18,
    'BINARY_FLOAT', 7, 'BINARY_DOUBLE', 15,decode(t.data_length, 0, 1,
    t.data_length)) as column_size, 0 AS buffer_length,
    decode(t.data_type, 'NUMBER', decode(t.data_scale, null,
    decode(t.data_precision, null, 0,null), t.data_scale), 'FLOAT',
    NULL,'DATE', 0, NULL) AS decimal_digits, decode(t.data_type,
    'BINARY_FLOAT', 10, 'BINARY_DOUBLE', 10, 'FLOAT', 10, 'NUMBER', 10,
    NULL) AS num_prec_radix, DECODE (t.nullable, 'N', 0, 1) AS nullable,
    NULL AS remarks,NULL AS column_def, null AS sql_data_type, null AS
    sql_datetime_sub, decode(t.data_type, 'VARCHAR2',
    decode(t.data_length, 0, 1, t.data_length), 'CHAR', t.data_length,
    'NCHAR', t.data_length, 'CLOB', 2147483647, 'NCLOB', 2147483647,
    'LONG', 2147483647, 'BFILE', 2147483647, NULL) AS char_octet_length,
    t.column_id AS ordinal_position, DECODE (t.nullable, 'N', 'NO', 'YES')
    AS is_nullable, null as SCOPE_CATLOG, null as SCOPE_SCHEMA, null as
    SCOPE_TABLE, null as SOURCE_DATA_TYPE FROM all_tab_columns t WHERE
    t.owner LIKE 'REI' ESCAPE '\' AND t.table_name LIKE 'G_PAGE' ESCAPE
    '\' AND t.column_name LIKE '%' ESCAPE '\' UNION ALL SELECT NULL,
    asy.owner, asy.synonym_name , t.column_name, DECODE (t.data_type,
    'CHAR', 1, 'CLOB', 2005, 'BLOB', 2004, 'VARCHAR2', 12, 'NUMBER', 3,
    'LONG', -1, 'DATE', 93,'RAW', -3, 'LONG RAW', -4, 'BINARY_FLOAT', 7,
    'BINARY_DOUBLE', 8, 'XMLTYPE',2005, 'BFILE',2004,'NCHAR',1,
    'NVARCHAR2',12, 'NCLOB', 2005, 'ROWID', 12, 'FLOAT', 8, 1111),
    t.data_type, decode(t.data_type, 'NUMBER', decode(t.data_precision,
    null, decode(t.data_scale, null, 0, 0, 38, t.data_scale),
    t.data_precision), 'FLOAT', 15, 'CLOB',2147483647, 'NCLOB',
    2147483647, 'LONG', 2147483647, 'BLOB', 2147483647, 'LONG
    RAW',2147483647, 'BFILE', 2147483647, 'DATE', 19, 'ROWID', 18,
    'BINARY_FLOAT', 7, 'BINARY_DOUBLE', 15,decode(t.data_length, 0, 1,
    t.data_length)), 0, decode(t.data_type, 'NUMBER', nvl(t.data_scale,
    0), 'DATE', 0, 'FLOAT', NULL, NULL) AS decimal_digits,
    decode(t.data_type, 'FLOAT', 10, 'NUMBER', 10, NULL), DECODE
    (t.nullable, 'N', 0, 1), NULL, NULL, null, null, decode(t.data_type,
    'VARCHAR2', decode(t.data_length, 0, 1, t.data_length), 'CHAR',
    t.data_length, 'NCHAR', t.data_length, 'CLOB', 2147483647, 'NCLOB',
    2147483647, 'LONG', 2147483647, 'BFILE', 2147483647, NULL),
    t.column_id, DECODE (t.nullable, 'N', 'NO', 'YES'), null, null, null,
    null FROM all_synonyms asy, all_tab_columns t WHERE t.table_name =
    asy.table_name AND t.owner = asy.table_owner AND t.column_name LIKE
    '%' ESCAPE '\' AND asy.owner LIKE 'REI' ESCAPE '\' AND
    asy.synonym_name LIKE 'G_PAGE' ESCAPE '\' ORDER BY table_schem,
    table_name, ordinal_position
    This is a very long and resource-consuming query.
    Why is this query executed every time I do a commit, a select, or anything else?
    A commit takes 30 seconds (http://swforum.sun.com/jive/thread.jspa?threadID=93759&tstart=0).
    Are all Oracle users subject to this problem?
    Regards
    Kuon

    Here's my theory:
    Oracle's JDBC driver (not Sun's) doesn't support rowset.getMetaData() until the query statement is executed. I understand that the database itself doesn't support it (a year ago this was a problem, and I still think so now).
    Sun's JDBC driver for Oracle (repackaged from DataDirect) does support rowset.getMetaData().
    So how does it do this? I don't know exactly, but the only way I can think of is to parse the query, get the database metadata, and use all of that to determine the ResultSet metadata.
    I think you're seeing the "get the database metadata" request, and that's just taking a long time.
    I'd expect that data to be cached by the driver on (at least) a per-connection basis. It doesn't look like it is, though.
    Maybe someone from Sun has the current scoop for plans in this area.
    In the meantime, I think you're hosed if you want to use a CachedRowSet.
    Workarounds would be using plain old jdbc, hibernate, etc., to get the data then wrapping your results into an ObjectArrayDataProvider or whatever.
    Good luck.

  • Too long query

    Hello,
    I'm working on Oracle 11.2.0.3.
    I'm trying to execute this query
    SELECT distinct s, prefLabel, o
    FROM TABLE(SEM_MATCH('PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    PREFIX orardf: <http://xmlns.oracle.com/rdf/>
    SELECT *
    WHERE {
    ?s ?p ?o .
    ?s skos:prefLabel ?prefLabel .
    filter (lang(?prefLabel) = "fr") .
    filter (orardf:textContains(?prefLabel, "famille")) .
    }',
    SEM_Models('modelinf'),
    SEM_Rulebases('SKOSCORE'),
    null,
    null,
    null,
    null ))
    but it takes too long.
    I'm not sure that all the necessary indexes have been created on the database.
    Could you help me optimize this query?
    Thanks.
    Cyril.

    Hello,
    this is the execution plan of this query
    SELECT s, prefLabel
    FROM TABLE(SEM_MATCH('PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    PREFIX orardf: <http://xmlns.oracle.com/rdf/>
    SELECT distinct ?s ?prefLabel
    WHERE {
    ?s rdf:type skos:Concept .
    ?s skos:prefLabel ?prefLabel .
    filter (lang(?prefLabel) = "fr") .
    filter (orardf:textContains(?prefLabel, "famille")) .
    }',
    SEM_Models('modelinf'),
    SEM_Rulebases('SKOSCORE'),
    null,
    null,
    null,
    null ))
    It takes 2.703 seconds for 12 rows
    Plan hash value: 1619577833
    | Id | Operation | Name | Pstart| Pstop |
    | 0 | SELECT STATEMENT | | | |
    | 1 | COLLECTION ITERATOR SUBQUERY FETCH | | | |
    | 2 | COUNT | | | |
    |* 3 | FILTER | | | |
    | 4 | NESTED LOOPS | | | |
    | 5 | NESTED LOOPS | | | |
    | 6 | VIEW | | | |
    | 7 | SORT GROUP BY | | | |
    | 8 | NESTED LOOPS | | | |
    | 9 | NESTED LOOPS | | | |
    | 10 | NESTED LOOPS | | | |
    | 11 | VIEW | | | |
    | 12 | UNION-ALL | | | |
    | 13 | PARTITION LIST SINGLE | | 3 | 3 |
    |* 14 | INDEX RANGE SCAN | RDF_LNK_PCS_IDX | 3 | 3 |
    | 15 | PARTITION LIST SINGLE | | 4 | 4 |
    |* 16 | INDEX RANGE SCAN | RDF_LNK_PCSGM_IDX | 4 | 4 |
    | 17 | VIEW | | | |
    | 18 | UNION-ALL PARTITION | | | |
    | 19 | PARTITION LIST SINGLE | | 3 | 3 |
    |* 20 | INDEX RANGE SCAN | RDF_LNK_PSC_IDX | 3 | 3 |
    | 21 | PARTITION LIST SINGLE | | 4 | 4 |
    |* 22 | INDEX RANGE SCAN | RDF_LNK_PSCGM_IDX | 4 | 4 |
    |* 23 | INDEX UNIQUE SCAN | C_PK_VID | | |
    |* 24 | TABLE ACCESS BY INDEX ROWID| RDF_VALUE$ | | |
    |* 25 | INDEX UNIQUE SCAN | C_PK_VID | | |
    | 26 | TABLE ACCESS BY INDEX ROWID | RDF_VALUE$ | | |
    |* 27 | TABLE ACCESS FULL | RDF_RI_SHAD_5$ | | |
    Predicate Information (identified by operation id):
    3 - filter( NOT EXISTS (SELECT 0 FROM "MDSYS"."RDF_RI_SHAD_5$"
    "RDF_RI_SHAD_5$" WHERE LNNVL("RDF_RI_SHAD_5$"."ID"<>1)))
    14 - access("P_VALUE_ID"=834132227519661324 AND
    "CANON_END_NODE_ID"=8129753520990573772 AND "START_NODE_ID">0 AND
    "START_NODE_ID" IS NOT NULL)
    16 - access("P_VALUE_ID"=834132227519661324 AND
    "CANON_END_NODE_ID"=8129753520990573772 AND "START_NODE_ID">0 AND
    "START_NODE_ID" IS NOT NULL)
    20 - access("P_VALUE_ID"=8569708817671647133 AND
    "START_NODE_ID"="from$_subquery$_007"."START_NODE_ID" AND
    "CANON_END_NODE_ID">0 AND "CANON_END_NODE_ID" IS NOT NULL)
    filter("START_NODE_ID">0)
    22 - access("P_VALUE_ID"=8569708817671647133 AND
    "START_NODE_ID"="from$_subquery$_007"."START_NODE_ID" AND
    "CANON_END_NODE_ID">0 AND "CANON_END_NODE_ID" IS NOT NULL)
    filter("START_NODE_ID">0)
    23 - access("V0"."VALUE_ID"="from$_subquery$_011"."CANON_END_NODE_ID")
    24 - filter("SEM_APIS"."GETV$LANGVAL"("V0"."VALUE_TYPE","V0"."VNAME_PRE
    FIX","V0"."VNAME_SUFFIX","V0"."LITERAL_TYPE","V0"."LANGUAGE_TYPE")='fr'
    AND "CTXSYS"."CONTAINS"("V0"."VNAME_PREFIX",'famille'||'')>0)
    25 - access("R"."S$RDFVID"="V0"."VALUE_ID")
    27 - filter(LNNVL("RDF_RI_SHAD_5$"."ID"<>1))
    Thanks.
    Cyril

  • Long query on Forms

    Hello
    I have a datablock, whose datasource is a view (sourceType = 'table'; sourcename =viewName).
    All of the datablock items are database items (no calculations, etc..)
    The ORDER BY clause has pre-defined values ("date field" DESC)
    The default_where property is built in the form's code.
    Problem is: if the number of records retrieved is low (a few hundred), the form works fine; if it is big, it takes a long time to execute the block's query.
    I tried setting Optimizer_Hints = 'FIRST_ROWS', and both Query_Array_Size and Records_Buffered to 50.
    The time needed to retrieve the data remains high (to the user it seems the form "hangs"), and once the data is shown, scrolling through it is pretty fast.
    Isn't it true that, if the datablock were retrieving 50 records at a time, the scrolling would take a bit longer?
    My goal is to make the query execute faster, even if I get fewer records each time...
    I'm using Oracle Forms 10g on Oracle database 10g
    Thanks for any help

    Hello,
    At first sight, this seems to be a database modeling issue. I don't think Forms will go any faster until you correct the problem in the database. Look at the whole SQL statement that is generated, then check your indexes.
    Francois
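
    As a quick way to act on that advice, the data dictionary shows which indexes already cover the columns in your WHERE clause. A minimal sketch (BASE_TABLE is a placeholder for the actual table behind the view):

    SELECT index_name, column_name, column_position
    FROM   user_ind_columns
    WHERE  table_name = 'BASE_TABLE'
    ORDER  BY index_name, column_position;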

  • Breaking Date Range in Query...

    Hi Friends,
    I have a table which records the leaves taken by employees. The leave start date and end date are stored as a range, e.g. leave from 10th March 2006 to 15th March 2006. I need to generate a report with a record for each day of the leave, i.e. a record each for the 10th, 11th, 12th, 13th, 14th and 15th. How can I break the date range into the individual dates within that range in a SQL query?
    thanks a lot,
    Jalpan Pota

    You can do it with a pipelined function. I have posted one in Re: Quarters Missing.?? that produces a range of quarters for a given date range; you should easily be able to amend it so that it produces a range of dates. Note that you will need to create a type that is a nested table of dates.
    Cheers, APC
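
    If a pipelined function feels like overkill, a plain row generator gives the same expansion in a single statement. A minimal sketch, assuming a hypothetical emp_leave table with emp_id, leave_start and leave_end DATE columns:

    SELECT e.emp_id,
           e.leave_start + (d.n - 1) AS leave_day
    FROM   emp_leave e
           JOIN (SELECT LEVEL AS n
                 FROM   dual
                 CONNECT BY LEVEL <= 366) d        -- upper bound on the leave length
             ON d.n <= e.leave_end - e.leave_start + 1
    ORDER  BY e.emp_id, leave_day;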

  • SUN TEAM: Bugs in update and delete a record with long query

    Creator Team,
    In my opinion there is a bug with updating and deleting a record through a complex SQL query. I'm using Oracle XE and ojdbc14.jar with Tomcat.
    On just two pages I'm receiving the following messages (I have 12 pages doing the same thing with less complex queries):
    * Number of conflicts while synchronizing: 1 SyncResolver.DELETE_ROW_CONFLICT row 2 won't delete as values in database have changed: 2006-11-29
    * Cannot commit changes: Number of conflicts while synchronizing: 1 SyncResolver.UPDATE_ROW_CONFLICT row 0 values changed in database
    when I try to delete or commit the updated changes to the record...
    The interesting thing is that this code works with the JDBC driver from JSC...
    My query is below:
    SELECT ALL PATRIMONIO.TB_BEM.INCODIGOBEM,
    PATRIMONIO.TB_BEM.VATOMBAMENTO,
    PATRIMONIO.TB_BEM.VAMATERIAL,
    PATRIMONIO.TB_BEM.INCODIGOSETOR,
    PATRIMONIO.TB_SETOR.VANOME AS NOMESETOR,
    PATRIMONIO.TB_BEM.INCODIGOFORNECEDOR,
    PATRIMONIO.TB_FORNECEDOR.VANOME AS NOMEFORNECEDOR,
    PATRIMONIO.TB_BEM.DACHEGADA ,
    PATRIMONIO.TB_BEM.DASAIDAPREVISTA,
    PATRIMONIO.TB_BEM.DASAIDAEFETIVA,
    PATRIMONIO.TB_BEM.VAMARCA,
    PATRIMONIO.TB_BEM.VAMODELO,
    PATRIMONIO.TB_BEM.VADESBAIXABEM,
    PATRIMONIO.TB_BEM.INCODIGOTIPOAQUISICAO,
    PATRIMONIO.TB_TIPOAQUISICAO.VANOME AS NOMETIPOAQUISICAO
    FROM PATRIMONIO.TB_BEM , PATRIMONIO.TB_TIPOAQUISICAO, PATRIMONIO.TB_SETOR, PATRIMONIO.TB_FORNECEDOR
    WHERE PATRIMONIO.TB_BEM.INCODIGOTIPOAQUISICAO = PATRIMONIO.TB_TIPOAQUISICAO.INCODIGOTIPOAQUISICAO
    AND PATRIMONIO.TB_BEM.INCODIGOSETOR = PATRIMONIO.TB_SETOR.INCODIGOSETOR
    AND PATRIMONIO.TB_BEM.INCODIGOFORNECEDOR = PATRIMONIO.TB_FORNECEDOR.INCODIGOFORNECEDOR
    AND PATRIMONIO.TB_BEM.INCODIGOBEM LIKE ?
    AND PATRIMONIO.TB_BEM.VATOMBAMENTO LIKE ?
    AND PATRIMONIO.TB_BEM.VAMATERIAL LIKE ?
    AND PATRIMONIO.TB_SETOR.VANOME LIKE ?
    AND PATRIMONIO.TB_FORNECEDOR.VANOME LIKE ?
    ORDER BY PATRIMONIO.TB_BEM.VATOMBAMENTO ASC
    Why is this problem happening? Do you have a solution for it? Is the problem that the query is too long?
    Please help me!
    Gustavo Callou

    Hello people,
    I'm doing the following to try to work around that bug:
    This code is working fine... but I do not understand why I'm receiving the NullPointerException.
    // create a new rowset
    CachedRowSetXImpl pkRowSet = new CachedRowSetXImpl();
    try {
        RowKey rk = tableRowGroup1.getRowKey();
        if (rk != null) {
            // set the rowset to use the Patrimonio database
            pkRowSet.setDataSourceName("java:comp/env/jdbc/Patrimonio");
            String query = "DELETE FROM TB_BEM WHERE INCODIGOBEM = "
                    + tb_bemDataProvider.getValue("INCODIGOBEM", rk).toString();
            pkRowSet.setCommand(query);
            pkRowSet.setTableName("TB_BEM");
            // execute the rowset -- which will contain a single row and single column
            pkRowSet.execute();
            pkRowSet.next();
            info("Apagado");
        }                                          // this closing brace was missing
    } catch (Exception ex) {
        log("ErrorDescription", ex);
        error(getApplicationBean1().trateException(ex.getMessage()));
    } finally {
        pkRowSet.close();
    }
    Please someone help me!!!
    Gustavo Callou

  • Send a break to interrupt a query?

    With SQL*Plus it is possible to interrupt a long-running query. This is discussed in this thread:
    Killing long running SQL query
    Is something like this possible in APEX?
    Rene

    Hello Rene,
    I've never played with it myself, but in the APEX Utilities tab, under Database Monitor > Sessions, you can see a list of sessions relevant to APEX, and you have the option to kill a session. You should check it and see if it can help you.
    Regards,
    Arie.
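
    For reference, the kill option in that monitor boils down to standard session termination, which a DBA can also do by hand. A minimal sketch (the sid,serial# value is a placeholder taken from the first query):

    -- find candidate sessions (run as a suitably privileged user)
    SELECT sid, serial#, username, module, status
    FROM   v$session
    WHERE  status = 'ACTIVE';

    -- terminate the long-running one
    ALTER SYSTEM KILL SESSION '123,4567';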

  • GET Method with long query string

    Hi there,
    Not sure if this has already been answered. Sorry if it has!
    I have a Biztalk application which does a pass-through for all http requests. It is using WCF-WebHttp transport type with URL mapping of /*.
    It works fine except for GET requests with a query string longer than 256 characters; those fail with the following exception:
    The adapter "WCF-WebHttp" raised an error message. Details "System.ArgumentOutOfRangeException: The value of promoted property cannot exceed 256 characters. Property "To" Namespace "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties".
    My question is: is there a workaround for this, e.g. extending the string length limit?

    Hi Karsten,
    Try giving one part of the URL in the Address box and passing the other arguments inside the HTTP Method and URL Mapping dialog.
    Eg:
    Address (URI) : https://btstecheddemostorage.blob.core.windows.net
    <BtsHttpUrlMapping>
    <Operation Name="ListFiles"
    Method="GET" Url="/{mycontainer}?restype=container&amp;comp=list"
    /> </BtsHttpUrlMapping>
    Thank YOu,
    Tamil

  • How long query took to run

    I ran some queries last week and want to know how long they took to run. Do you know how to find out?
    Thank you

    Try transaction code ST03.
    Before that, make sure you check the following option:
    RSA1 > Tools > BW statistics option for InfoProvider (select the ODS or cube over which you are running the query)
    - according to some experts:
    You enter ST03, select Expert Mode, then click on BW System Load in the left frame.
    This should open another frame on the bottom left with a category called Analysis views. Under that should be an entry for Query Run Times. You can double-click on that or drill down to the type of query - BEX, WEB, API, or ODBO.
    The results are at an InfoProvider level; double-click on the InfoProvider and you should get query-level totals. You can NOT get the timings for a specific navigation from ST03 (or ST03N - they're the same thing as long as you are on a recent enough version).
    To get the timings of a single navigation, use the info from RSDDSTAT (I think the tables have new names in 2004s), or if you are loading the Technical Content cubes you can query those.
    Another option is to run the query through RSRT in Execute & Debug mode, selecting the Display Statistics Data option. After the query results are returned, click on the green Back arrow and you'll get the query stats. The total OLAP time, however, includes any human think time if you need to fill in variables for the query. But you also get the DB, OLAP processing, front-end times etc., which are good for comparison.
    Please assign points if the info is useful.
    Regards
    CSM Reddy

  • How to treat data in a long query

    I have a query (over a table accessed through a dblink) that returns a lot of rows (over 7 million); when I try to fetch the data row by row I get an out-of-memory problem. So I want to process the records in chunks, but I don't know how to do that in my procedure.
    The query is assigned to a cursor, and I pass the data I need from each row to another table.
    This is the code I have:
    DECLARE
      CURSOR r_cursor IS
        SELECT *
        FROM med_principio_activo@midblink;       -- the terminating semicolon was missing
      cursor1  r_cursor%ROWTYPE;
      medic    medicam.sid%TYPE;                  -- declarations for the variables used below
      ppioact  activeingredient.sid%TYPE;
      -- xkey is assumed to be declared elsewhere in the original procedure
    BEGIN
      OPEN r_cursor;                              -- the cursor must be opened before fetching
      LOOP
        FETCH r_cursor INTO cursor1;
        EXIT WHEN r_cursor%NOTFOUND;
        SELECT MAX(sid) INTO medic
        FROM medicam
        WHERE cursor1.codmedic LIKE xkey;
        SELECT MAX(sid) INTO ppioact
        FROM activeingredient
        WHERE cursor1.codppioact LIKE xkey;
        INSERT INTO med_medicamentai (medicament, activeingredient)
        VALUES (medic, ppioact);                  -- closing parentheses were missing
        COMMIT;
      END LOOP;
      CLOSE r_cursor;
    END;
    I want to work in chunks of 10,000 records so that I can read all the info and keep memory free, but I don't know how to do that...
    Thanks,

    Hi,
    instead of using PL/SQL with a cursor, you could insert directly into the table,
    like
    INSERT INTO med_medicamentai
    (medicament, activeingredient)
    VALUES (...
    Before that, we need to know how the data in med_principio_activo@midblink,
    medicam and activeingredient are related.
    If you want to use a cursor, then BULK COLLECT will improve the performance.
    Please look at this link for more info.
    http://download.oracle.com/docs/cd/B14117_01/appdev.101/b10807/12_tune.htm#i49139
    Thanks
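
    Expanding on the BULK COLLECT suggestion, fetching with a LIMIT clause gives exactly the 10,000-row chunks asked for. A minimal sketch mirroring the tables in the original post (only the two columns actually used are fetched; xkey is assumed to be declared elsewhere in the procedure, as in the original code):

    DECLARE
      CURSOR r_cursor IS
        SELECT codmedic, codppioact
        FROM   med_principio_activo@midblink;
      TYPE t_rows IS TABLE OF r_cursor%ROWTYPE;
      l_rows  t_rows;
      medic   medicam.sid%TYPE;
      ppioact activeingredient.sid%TYPE;
    BEGIN
      OPEN r_cursor;
      LOOP
        FETCH r_cursor BULK COLLECT INTO l_rows LIMIT 10000;   -- one chunk of 10,000 rows
        EXIT WHEN l_rows.COUNT = 0;
        FOR i IN 1 .. l_rows.COUNT LOOP
          SELECT MAX(sid) INTO medic
          FROM   medicam
          WHERE  l_rows(i).codmedic LIKE xkey;
          SELECT MAX(sid) INTO ppioact
          FROM   activeingredient
          WHERE  l_rows(i).codppioact LIKE xkey;
          INSERT INTO med_medicamentai (medicament, activeingredient)
          VALUES (medic, ppioact);
        END LOOP;
        COMMIT;   -- commit once per chunk instead of once per row
      END LOOP;
      CLOSE r_cursor;
    END;
    /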

  • Axis timeout error when running long query

    Hi all,
    we've encountered a timeout problem when calling a web service that responds after a long delay. The error we get comes from the Axis library:
    java.net.SocketTimeoutException: Read timed out
    As far as I know, the default timeout value for Axis is 60 s. Is there any way to change it in XMLP?

    Hi Tim,
    Thanks for the follow-up!
    Is there any way we can tweak this manually for now, for version 5.6.2 of XML Publisher? I don't feel it's time for us to move to BI Publisher 10.1.3 since we are late in our delivery, but we will certainly evaluate the possibility of upgrading to 10.1.3 for the next delivery. I just got an SR soft-closed saying:
    "If they are putting it into 10.1.3.4 then they do not have a fix yet and you are asking for a backport 4 versions back which is unlikely.
    But you should ask in the thread."
    So that's what I'm doing now, hoping it's possible! Up to now, we haven't found any workaround for this issue.
    Thanks for your help!
