Index is fragmented
How can we know that an index is fragmented and that a rebuild is required?
While searching I found some references, which I am including here:
From Sridhar's Oracle pages:
http://oracle-core.blogspot.com/2007/04/moving-rebuilding-and-or-resizing.html
How to identify indexes with "excessive" logically deleted rows.
The first place to look is at any dynamic tables, i.e. interface and temporary tables.
Once the target tables have been identified, obtain a list of index names by using the following example SQL*PLUS script.
SELECT index_name
FROM dba_indexes
WHERE owner = 'SCOTT'
and table_name = 'EMP'
For each of the indexes identified, execute the validate index command followed by the SQL*PLUS script shown below to obtain the results of the validate index command. Note: The index_stats table holds only one row.
SQL> validate index gl.gl_interface_n1
Index analyzed.
SQL>
Use the following SQL*PLUS script to obtain the results.
column A heading "Index Name" format a30
column B heading "Rows" format 999,999,990
column C heading "Deleted|Rows" format 999,999,990
column D heading "% Del" format 990.0
SELECT name A,
lf_rows B,
del_lf_rows C,
(del_lf_rows * 100 ) / lf_rows D
FROM index_stats
I usually rebuild indexes that have more than 20 % logically deleted rows.
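Under that 20% rule, the rebuild itself can be sketched as follows (a sketch only; the index name reuses the gl.gl_interface_n1 example above, and REBUILD avoids the drop/create script when resizing or moving is not needed):

```sql
-- Re-check the logically deleted row percentage, then rebuild if it exceeds 20%.
VALIDATE INDEX gl.gl_interface_n1;

SELECT (del_lf_rows * 100) / lf_rows AS pct_deleted
FROM   index_stats;   -- holds one row: the last validated index

-- If pct_deleted > 20, rebuild in place:
ALTER INDEX gl.gl_interface_n1 REBUILD;
```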
Rebuilding an index.
The following PL/SQL script will build a SQL*PLUS script to drop and then recreate the index.
Note: The storage clause and the index owner's password in the output file will have to be modified before you execute the resulting script. Modifying the storage clause will allow you to resize and or move the index.
clear columns
set serveroutput on;
set echo off
set heading off
set feedback off
set verify off
WHENEVER SQLERROR CONTINUE
/** OBTAIN COMMAND LINE PARAMETERS **/
/** OWNER = Index owner **/
/** INDEX_NAME = Index name **/
DEFINE owner = '&1'
DEFINE index_name = '&2'
DECLARE
/** CURSOR C1 Get the index columns **/
CURSOR C1 is
SELECT column_name
FROM dba_ind_columns
WHERE index_owner = upper('&&owner')
and index_name = upper('&&index_name')
ORDER by column_position;
/** Declare variables **/
d_column_name VARCHAR(30);
d_table_name VARCHAR(30);
d_uniqe VARCHAR(9);
d_max_columns NUMBER(4) := 0;
d_column_counter NUMBER(4) := 0;
BEGIN
dbms_output.put_line('/****************************************/' );
dbms_output.put_line('/* Drop and create index script */' );
dbms_output.put_line('/* */' );
dbms_output.put_line('/* Don''t forget to edit this file for: */' );
dbms_output.put_line('/* */' );
dbms_output.put_line('/* 1. Connect password */' );
dbms_output.put_line('/* 2. Storage params */' );
dbms_output.put_line('/* */' );
dbms_output.put_line('/****************************************/' );
dbms_output.put_line('');
dbms_output.put_line('');
dbms_output.put_line('connect &&owner/password' );
dbms_output.put_line('');
dbms_output.put_line('');
dbms_output.put_line('DROP INDEX &&index_name;' );
dbms_output.put_line('');
dbms_output.put_line('');
/** Determine the number of columns **/
/** in the index **/
SELECT max(column_position)
INTO d_max_columns
FROM dba_ind_columns
WHERE index_owner = upper('&&owner')
and index_name = upper('&&index_name');
SELECT table_name, uniqueness
INTO d_table_name, d_uniqe
FROM dba_indexes
WHERE owner = upper('&&owner')
and index_name = upper('&&index_name');
dbms_output.put_line('CREATE '|| d_uniqe ||' INDEX &&index_name' );
dbms_output.put_line('ON '||d_table_name );
dbms_output.put_line('(');
OPEN C1;
LOOP
FETCH C1 INTO d_column_name;
d_column_counter := d_column_counter + 1;
EXIT WHEN C1%NOTFOUND;
IF d_column_counter < d_max_columns THEN
dbms_output.put_line( d_column_name||',' );
ELSE
dbms_output.put_line( d_column_name||' )' );
END IF;
END LOOP;
dbms_output.put_line('STORAGE ( initial X M next X M');
dbms_output.put_line(' minextents 1 maxextents 50');
dbms_output.put_line(' pctincrease 0 );');
END;
NOTE 1: If the index is being used by another user, you will not be able to either validate or drop the index.
NOTE 2: Beware of primary key constraints: an index that enforces a primary key cannot be dropped directly; the constraint must be disabled or dropped first.
General details about the index can also be found by:
Analyze index <index_name> compute statistics;
Select * from user_indexes
where index_name = '<INDEX_NAME>';
To obtain further detail about an index:
Analyze index <index_name> validate structure;
The command:
Validate index <index_name>;
performs the same function.
This places detailed information about the index in the table INDEX_STATS. This table can only contain one row, describing only the one index. This SQL also verifies the integrity of each data block in the index and checks for block corruption.
For example, to get the size of an index:
validate index <index_name>;
select name "INDEX NAME", blocks * 8192 "BYTES ALLOCATED",
btree_space "BYTES USED",
(btree_space / (blocks * 8192))*100 "PERCENT USED"
from index_stats;
This assumes a block size of 8K (i.e. 8192 bytes). It shows the number of bytes allocated to the index and the number of bytes actually used.
Note that it does not confirm that each row in the table has an index entry or that each index entry points to a row in the table. To check this:
Analyze table <table_name> validate structure cascade;
My thanks to Mr. Sridhar Kasukurthi for the above.
Similar Messages
-
Index still fragmented after rebuild
I have a 2008R2 database with an index that is 97% fragmented on average, with a page count of 164,785; the indexed table has more than 300,000 rows.
I use the ola.hallengren script to optimize indexes; the statistics above were provided by the IndexCheck.sql script.
My question is why, after the index above has been rebuilt, the fragmentation is very much the same as before. The index was rebuilt because fragmentation was larger than 30% and the page count was larger than 1000.
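For reference, per-level numbers like those in the results below typically come from sys.dm_db_index_physical_stats in DETAILED mode joined to sys.indexes; a sketch of such a query (the table name is inferred from the index names in the results and may differ in your schema):

```sql
-- Sketch: per-level fragmentation for one table's indexes (SQL Server).
SELECT i.name,
       i.is_primary_key,
       i.is_unique,
       ips.index_depth,
       ips.index_level,
       ips.fragment_count,
       ips.page_count,
       ips.record_count,
       ips.avg_fragment_size_in_pages,
       ips.avg_page_space_used_in_percent
FROM   sys.dm_db_index_physical_stats(
           DB_ID(), OBJECT_ID('dbo.zfAuditTStudentClass'),
           NULL, NULL, 'DETAILED') AS ips
JOIN   sys.indexes AS i
       ON i.object_id = ips.object_id AND i.index_id = ips.index_id;
```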
However, if a copy of that database is restored on a different server, the fragmentation is well below 30%.
Results (one row per index level; the avg_fragment_size_in_pages column appears twice in the original output):
name | is_primary_key | is_unique | index_depth | index_level | fragment_count | page_count | record_count | avg_fragment_size_in_pages | avg_fragment_size_in_pages | avg_page_space_used_in_percent
PK_zfAuditTStudentClass | 1 | 1 | 3 | 2 | 1 | 1 | 578 | 1 | 1 | 92.80948851
PK_zfAuditTStudentClass | 1 | 1 | 3 | 1 | 462 | 578 | 197354 | 1.251082251 | 1.251082251 | 54.81539412
PK_zfAuditTStudentClass | 1 | 1 | 3 | 0 | 4504 | 197354 | 34734021 | 43.81749556 | 43.81749556 | 99.99919694
IX_zfAuditTStudentClass | 0 | 0 | 4 | 3 | 1 | 1 | 5 | 1 | 1 | 1.519644181
IX_zfAuditTStudentClass | 0 | 0 | 4 | 2 | 5 | 5 | 852 | 1 | 1 | 52.60686929
IX_zfAuditTStudentClass | 0 | 0 | 4 | 1 | 847 | 852 | 166236 | 1.005903188 | 1.005903188 | 60.23989375
IX_zfAuditTStudentClass | 0 | 0 | 4 | 0 | 163026 | 166236 | 34734021 | 1.019690111 | 1.019690111 | 51.60465777 -
Objects (table, index, partition) fragmented
Hi all,
I run the script bellow to find fragmented objects:
select segment_name, segment_type, count(*) from dba_extents
where owner = ownname
group by segment_name, segment_type
having count(*) > 1
order by 3
And I get a list like this:
object_names object_types 249
My question is: given that these objects are fragmented, what can I do to resolve the fragmentation, and how can I avoid object fragmentation?
regards
raistarevo
Hi,
You can go to the 8i documentation for more information:
http://download-west.oracle.com/docs/cd/A87860_01/doc/appdev.817/a76936/dbms_s2a.htm#1004668
But I advise you to read the thread below first.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:8381899310385
Cheers -
How does Index fragmentation and statistics affect the sql query performance
Hi,
How does Index fragmentation and statistics affect the sql query performance
Thanks
Shashikala
Shashikala
How does index fragmentation and statistics affect SQL query performance?
Very simple answer: outdated statistics will lead the optimizer to create bad plans, which in turn will require more resources, and this will impact performance. If an index is fragmented (mainly a clustered index, though it holds true for nonclustered ones as well), the time spent finding a value will be greater, as the query has to search a fragmented index to look for the data, and the additional space increases search time.
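Both factors mentioned above can be checked; a sketch for SQL Server, with hypothetical object names (dbo.MyTable is a placeholder, not from the thread):

```sql
-- Refresh statistics, then check fragmentation (names hypothetical).
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

SELECT avg_fragmentation_in_percent
FROM   sys.dm_db_index_physical_stats(
           DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED');
```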
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers
My TechNet Wiki Articles -
Fragmentation in tables and indexes
Can fragmentation exist in tables & indexes with ASM?
mexman,
ASM is a disk management system that has no relationship to the concept of table/index fragmentation.
You probably mean ASSM.
ASSM affects some of the details of how object space is managed but table/index internal fragmentation could still exist though this is usually not an issue with or without ASSM depending on the DML pattern.
Research the archives on this topic as suggested if you want to know about detecting internal object fragmentation.
HTH -- Mark D Powell -- -
Dear all,
I am using the following command to find the fragmented tables and indexes, which of course returns the names of the tables and indexes that are fragmented:
select * from dba_segments where extents > 10 and owner like '%DB%'
The output is 43 rows.
Next, I rebuilt certain indexes, yet when I run the above SQL statement I still see the rebuilt index, which means its extent count is still greater than 10.
I also tried first dropping the index and creating it again. The index is still fragmented.
What can I do?
Regards
SL
1) The fact that an object has more than 10 extents is completely unrelated to whether it is fragmented or not.
2) Since you haven't specified the Oracle version or the type of tablespace, assuming you've got a recent version of Oracle and are using locally managed tablespaces, the number of extents allocated to an object is pretty much irrelevent, particularly if you've got automatic extent allocation.
3) If you are on a recent version of Oracle using LMT's, it is essentially impossible for objects to be fragmented for most any reasonable definition of "fragmented".
Justin -
Error while running the Oracle Text optimize index procedure (even as a dba user too)
Hi Experts,
I am on Oracle 11.2.0.2 on Linux. I have implemented Oracle Text. My Oracle Text indexes are fragmented, but I am getting an error while running the optimize_index procedure. Following is the error:
begin
ctx_ddl.optimize_index(idx_name=>'ACCESS_T1',optlevel=>'FULL');
end;
ERROR at line 1:
ORA-20000: Oracle Text error:
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.CTX_DDL", line 941
ORA-06512: at line 1
Now I tried then to run this as DBA user too and it failed the same way!
begin
ctx_ddl.optimize_index(idx_name=>'BVSCH1.ACCESS_T1',optlevel=>'FULL');
end;
ERROR at line 1:
ORA-20000: Oracle Text error:
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.CTX_DDL", line 941
ORA-06512: at line 1
Now the CTXAPP role is granted to my schema, and I am still getting this error. I will be thankful for suggestions.
Also one other important observation: We have this issue ONLY in one database and in the other two databases, I don't see any problem at all.
I am unable to figure out what the issue is with this one database!
Thanks,
OrauserN
How about checking the following?
Bug 10626728 - CTX_DDL.optimize_index "full" fails with an empty ORA-20000 since 11.2.0.2 upgrade (Doc ID 10626728.8) -
SAP Table index size is greater than the size of the actual table
Hello Experts,
We are resolving an issue related to database performance. The present database size is 9 Terabytes. The analysis of response times through ST03N shows that the db time is 50% of the total response time. We are planning to reorganize the most updated tables (found from DB02old tx).
Here we see that the size of the index for a table is greater than the actual size of the table. Is this possible? If yes, how can we reorganize the index, given that it does not allow us to reorganize using the brspace command?
Hope to hear from you soon, and if any additional activities you can suggest to improve the performance of the database will be appreciated.
Thank you
Hi Zaheer,
Online redefinition may help you (for a little while), but also check WHY the index became fragmented.
Improper settings can make the index fragmented again, giving you recurring reorg needs.
i.e.
Check whether PCT_INCREASE > 0 if you are using dictionary-managed tablespaces, or locally managed tablespaces that use a "User" allocation policy. Set it to 0 to generate uniform next extents in the online reorg.
select
SEGMENT_NAME,
SEGMENT_TYPE,
round((NEXT_EXTENT*BLOCKS)/(EXTENTS*BYTES))*(BYTES/BLOCKS),
PCT_INCREASE
from
DBA_SEGMENTS
where
OWNER='SAPR3'
and
SEGMENT_TYPE in ('INDEX',
'TABLE')
and
PCT_INCREASE > 0
and segment_name in ('Yourtable','Yourindex')
In the following cases, it may be worthwhile to rebuild the index:
--> the percentage of the space used is bad - lower than 66%: PCT_USED
--> deleted leaf blocks represent more than 20% of total leaf blocks: DEL_LF_ROWS
--> the height of the tree is bigger than 3: HEIGHT or BLEVEL
select
name,
'----------------------------------------------------------' headsep,
'height '||to_char(height, '999,999,990') height,
'blocks '||to_char(blocks, '999,999,990') blocks,
'del_lf_rows '||to_char(del_lf_rows,'999,999,990') del_lf_rows,
'del_lf_rows_len '||to_char(del_lf_rows_len,'999,999,990') del_lf_rows_len,
'distinct_keys '||to_char(distinct_keys,'999,999,990') distinct_keys,
'most_repeated_key '||to_char(most_repeated_key,'999,999,990') most_repeated_key,
'btree_space '||to_char(btree_space,'999,999,990') btree_space,
'used_space '||to_char(used_space,'999,999,990') used_space,
'pct_used '||to_char(pct_used,'990') pct_used,
'rows_per_key '||to_char(rows_per_key,'999,999,990') rows_per_key,
'blks_gets_per_access '||to_char(blks_gets_per_access,'999,999,990') blks_gets_per_access,
'lf_rows '||to_char(lf_rows, '999,999,990')||' '||
'br_rows '||to_char(br_rows, '999,999,990') br_rows,
'lf_blks '||to_char(lf_blks, '999,999,990')||' '||
'br_blks '||to_char(br_blks, '999,999,990') br_blks,
'lf_rows_len '||to_char(lf_rows_len,'999,999,990')||' '||
'br_rows_len '||to_char(br_rows_len,'999,999,990') br_rows_len,
'lf_blk_len '||to_char(lf_blk_len, '999,999,990')||' '||
'br_blk_len '||to_char(br_blk_len, '999,999,990') br_blk_len
from
index_stats where name = 'YOURINDEX'
bye
yk -
Rebuild Indexes question - What is better?
Is there an advantage to running this via a maintenance task in ConfigMgr vs. a SQL maintenance plan? What about the Fill Factor and other options?
hi,
well... that depends on your SQL maintenance plan :)
AFAIK SCCM rebuilds an index if the fragmentation rate is above 30% and reorganizes it for any value below this threshold.
Steve Thompson provided some useful information on fragmentation on his blog, and on how to determine whether the maintenance task does its job.
http://stevethompsonmvp.wordpress.com/2013/04/19/how-to-determine-if-the-configmgr-rebuild-indexes-site-maintenance-task-is-running/
kind regards -
Oracle Text Context index keeps growing. Optimize seems not to be working
Hi,
In my application I needed to search through many varchar columns from different tables.
So I created a materialized view in which I concatenate those columns; since they exceed 4000 characters, I merged them by concatenating the columns with TO_CLOB(column1) || TO_CLOB(column2) ... || TO_CLOB(columnN).
The query is complex, so the refresh is a complete refresh, on demand, for the view. We refresh it every 2 minutes.
The CONTEXT index is created with the sync on commit parameter.
The index then is synchronized every two minutes.
But when we run the optimize index it does not defrag the index. So it keeps growing.
Any idea ?
Thanks, and sorry for my poor english.
Edited by: detryo on 14-mar-2011 11:06
What are you using to determine that the index is fragmented? Can you post a reproducible test case? Please see my test of what you described below, showing that the optimization does defragment the index.
SCOTT@orcl_11gR2> -- table:
SCOTT@orcl_11gR2> create table test_tab
2 (col1 varchar2 (10),
3 col2 varchar2 (10))
4 /
Table created.
SCOTT@orcl_11gR2> -- materialized view:
SCOTT@orcl_11gR2> create materialized view test_mv3
2 as
3 select to_clob (col1) || to_clob (col2) clob_col
4 from test_tab
5 /
Materialized view created.
SCOTT@orcl_11gR2> -- index with sync(on commit):
SCOTT@orcl_11gR2> create index test_idx
2 on test_mv3 (clob_col)
3 indextype is ctxsys.context
4 parameters ('sync (on commit)')
5 /
Index created.
SCOTT@orcl_11gR2> -- inserts, commits, refreshes:
SCOTT@orcl_11gR2> insert into test_tab values ('a', 'b')
2 /
1 row created.
SCOTT@orcl_11gR2> commit
2 /
Commit complete.
SCOTT@orcl_11gR2> exec dbms_mview.refresh ('TEST_MV3')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> insert into test_tab values ('c a', 'b d')
2 /
1 row created.
SCOTT@orcl_11gR2> commit
2 /
Commit complete.
SCOTT@orcl_11gR2> exec dbms_mview.refresh ('TEST_MV3')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- query works:
SCOTT@orcl_11gR2> select * from test_mv3
2 where contains (clob_col, 'ab') > 0
3 /
CLOB_COL
ab
c ab d
2 rows selected.
SCOTT@orcl_11gR2> -- fragmented index:
SCOTT@orcl_11gR2> column token_text format a15
SCOTT@orcl_11gR2> select token_text, token_first, token_last, token_count
2 from dr$test_idx$i
3 /
TOKEN_TEXT TOKEN_FIRST TOKEN_LAST TOKEN_COUNT
AB 1 1 1
AB 2 3 2
C 3 3 1
3 rows selected.
SCOTT@orcl_11gR2> -- optimization:
SCOTT@orcl_11gR2> exec ctx_ddl.optimize_index ('TEST_IDX', 'REBUILD')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- defragmented index after optimization:
SCOTT@orcl_11gR2> select token_text, token_first, token_last, token_count
2 from dr$test_idx$i
3 /
TOKEN_TEXT TOKEN_FIRST TOKEN_LAST TOKEN_COUNT
AB 2 3 2
C 3 3 1
2 rows selected.
SCOTT@orcl_11gR2> -
Indexorg/Re Indexing jobs in Primary server which has Logshipping
Hi All,
I have a primary server (SQL Server 2008 R2) with 200 databases, of which 90 are configured in log shipping (standby).
We are planning indexing jobs on the primary server.
Could somebody please advise us on best practices for implementing indexing jobs on the primary server?
Thanks in advance
There are a few considerations:
1. Whether to rebuild the index online or offline. Online produces fewer logs than offline; I proved this fact in the link below. Online rebuild is an Enterprise-only option.
http://social.technet.microsoft.com/wiki/contents/articles/24420.curious-case-of-logging-in-online-and-offline-index-rebuild-in-full-recovery-model.aspx
Make sure your SQL Server is patched to the latest service pack, because an online index rebuild MIGHT cause an increase in database size after the rebuild. Below is the MS support article. This happens because an extra 14 bytes is added.
http://support.microsoft.com/kb/2812884
2. You should only rebuild indexes whose fragmentation is > 40% and page_count is > 1000, and reorganize if fragmentation is between 10 and 30.
3. Are you using the SORT_IN_TEMPDB option? If yes, make sure your tempdb has enough space; if not, make sure the drive on which the index resides has enough space.
4. Index rebuild is a maintenance task, so you also need to figure out how much time the index rebuild will take. The Ola script will surely be helpful if you use it.
5. Logs will be produced, so you need to make sure that just after the rebuild, when the logs are transferred to the secondary, the link is not heavily utilized; otherwise you will start receiving alerts from log shipping.
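The thresholds in point 2 can be sketched as a query that proposes an action per index (a sketch only; Ola Hallengren's scripts do this far more robustly):

```sql
-- Sketch: propose rebuild/reorganize per the thresholds in point 2.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count,
       CASE
         WHEN ips.avg_fragmentation_in_percent > 40
              AND ips.page_count > 1000           THEN 'REBUILD'
         WHEN ips.avg_fragmentation_in_percent
              BETWEEN 10 AND 30                   THEN 'REORGANIZE'
         ELSE 'leave as is'
       END AS suggested_action
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN   sys.indexes AS i
       ON i.object_id = ips.object_id AND i.index_id = ips.index_id;
```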
-
SQL azure database size not dropping down after deleting all table
Dear all,
I have a simple database on Azure from which I have deleted all table data. The size of the database still shows 5 MB of data, and I am charged for that. I have heard that this may happen when a clustered index gets fragmented.
I have run a query I found on the internet against all my table indexes to show the percentage of fragmentation, and all report 0%.
DBA work is not really my job, but what could it be, or how can I reduce that size?
On premises I would use a DB compaction, but that is not available in Azure, like some other DB actions.
Thanks for any tips.
Regards
User-created objects/data are not the only things stored in your database: you have system objects and metadata, as Mike mentions above.
are you trying to skip being charged if you're not storing data? looking at the pricing table, you'll still get charged the $4.995 for the 0-100MB database size range. -
Search in SharePoint Foundation 2013 SP1 does not work
When we type anything in any search bar, the result is “Sorry, something went wrong” with Correlation ID:
aa0fb79c-5878-d0b4-2016-c1ebc17f3336.
The server was originally installed with the 2013 Foundation beta version on a 2008 R2 server (single-server installation). The content was migrated from a MOSS 2007 installation using "Quest Migration Suite for SharePoint". The search feature has never worked.
We upgraded to SP1 (version 15.0.4571.1502) yesterday. The search still did not work.
We checked the crawl status. It had been running for some 9000 hours. We stopped it and started it again. It was still running after 24 hours so we stopped it again.
Under Monitoring / Review Problems and Solutions, a new problem appeared indicating that the index was fragmented. We hit the "Solve the issue" button (or whatever it is called) and the error went away.
To be sure we also went to “Search Service Application: Index Reset” and hit reset. After a while of “Working on it”, a page with the text “Sorry, something went wrong”
appear with error message as below:
There was no endpoint listening at net.tcp://sp-server/8E2ECE/AdminComponent1/Management/NodeController that could accept the message. This is often caused by an incorrect address or SOAP
action. See InnerException, if present, for more details.
So we are really not sure whether the index was correctly reset.
Going back to the crawl overview, a new crawl had been started. Looking under Search Service Application: Crawl Reports - Crawl Rate, the statistics are as below:
Crawl rate (dps) 0
Total items 34
Modified items 0
Not modified items 0
Security items 0
Deleted items 0
Retries 28
Errors 6
The crawl appears to be hanging somewhere here.
What actions should I take?
Hi Jonas,
I recommend clearing the configuration cache to see if the issue still occurs.
Please follow the link to clear the cache:
http://blogs.msdn.com/b/jamesway/archive/2011/05/23/sharepoint-2010-clearing-the-configuration-cache.aspx
To narrow the issue scope, I recommend creating a new search service application to see if the issue still occurs.
Best regards.
Thanks
Victoria Xia
TechNet Community Support -
Why named parameter can't be used multiple times in PL/SQL block in JDBC
With the following PL/SQL block, when I run it in JDBC I get an error:
it says, "The number of parameter names does not match the number of registered parameters."
If all named parameters are used only once, then my program works fine.
My old program uses Oracle Forms to run the attached PL/SQL block correctly; I just want to run it in JDBC without more effort, and I don't want to rewrite all the PL/SQL blocks.
Does the Oracle driver support this case? Why can the PL/SQL block work in Oracle Forms but fail in JDBC?
Is there another solution that avoids rewriting the PL/SQL block as a stored procedure?
if I use following SQL:
BEGIN if :q is null then :q := 'X'; else :q := 'Y'; end if; END;
, Using java program:
import java.sql.*;

public class RunPLSQLBlock {
    public static void main(String s[]) throws SQLException {
        String URL = "jdbc:oracle:thin:@192.168.11.199:1521:TIBSTEST";
        Connection con = null;
        try {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            con = (Connection) DriverManager.getConnection(URL, "FBP1DEV", "FBP1DEV");
            String SQL = "BEGIN if :q is null then :q := 'X'; else :q := 'Y'; end if; END;";
            CallableStatement stmt = con.prepareCall(SQL);
            stmt.registerOutParameter("q", Types.VARCHAR);
            stmt.setString("q", "A");
            stmt.execute();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (con != null) {
                con.close();
            }
        }
    }
}
in the coding, only "q" registered, I got:
java.sql.SQLException: The number of parameter names does not match the number of registered parameters
    at oracle.jdbc.driver.OracleSql.setNamedParameters(OracleSql.java:314)
    at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:10096)
    at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:5693)
    at RunPLSQLBlock.main(RunPLSQLBlock.java:28)
Now I tried to register 3 indexes; the changed fragments are below.
import java.sql.*;

public class RunPLSQLBlock {
    public static void main(String s[]) throws SQLException {
        String URL = "jdbc:oracle:thin:@192.168.11.199:1521:TIBSTEST";
        Connection con = null;
        try {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            con = (Connection) DriverManager.getConnection(URL, "FBP1DEV", "FBP1DEV");
            String SQL = "BEGIN if :q is null then :q := 'X'; else :q := 'Y'; end if; END;";
            CallableStatement stmt = con.prepareCall(SQL);
            stmt.registerOutParameter(1, Types.VARCHAR);
            stmt.registerOutParameter(2, Types.VARCHAR);
            stmt.registerOutParameter(3, Types.VARCHAR);
            stmt.setString(1, "A");
            stmt.execute();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (con != null) {
                con.close();
            }
        }
    }
}
now error changed to:
java.sql.SQLException: ORA-01006: bind variable does not exist
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:457)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:400)
    at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:926)
    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:476)
    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:200)
    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:543)
    at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:208)
    at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:1416)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1757)
    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:4372)
    at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:4595)
    at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:10100)
    at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:5693)
    at RunPLSQLBlock.main(RunPLSQLBlock.java:26)
, now tried register only 1 position like below,
CallableStatement stmt = con.prepareCall(SQL);
stmt.registerOutParameter(1, Types.VARCHAR);
stmt.setString(1, "A");
stmt.execute();
, it says:
java.sql.SQLException: Missing IN or OUT parameter at index:: 2
    at oracle.jdbc.driver.OraclePreparedStatement.processCompletedBindRow(OraclePreparedStatement.java:2177)
    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:4356)
    at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:4595)
    at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:10100)
    at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:5693)
    at RunPLSQLBlock.main(RunPLSQLBlock.java:26)
, now let's try an OK case, which uses each named parameter only once; the SQL and Java are listed below.
BEGIN if :q is null then :r := 'X'; else :s := 'Y'; end if; EXCEPTION WHEN NO_DATA_FOUND THEN NULL; END;
import java.sql.*;

public class RunPLSQLBlock {
    public static void main(String s[]) throws SQLException {
        String URL = "jdbc:oracle:thin:@192.168.11.199:1521:TIBSTEST";
        Connection con = null;
        try {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            con = (Connection) DriverManager.getConnection(URL, "FBP1DEV", "FBP1DEV");
            String SQL = "BEGIN if :q is null then :r := 'X'; else :s := 'Y'; end if; END;";
            CallableStatement stmt = con.prepareCall(SQL);
            stmt.registerOutParameter("q", Types.VARCHAR);
            stmt.registerOutParameter("r", Types.VARCHAR);
            stmt.registerOutParameter("s", Types.VARCHAR);
            stmt.setString("q", "A");
            stmt.execute();
            System.out.println("Q :" + stmt.getString("q"));
            System.out.println("R :" + stmt.getString("r"));
            System.out.println("S :" + stmt.getString("s"));
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (con != null) {
                con.close();
            }
        }
    }
}
, the case give us the following output:
Q :A
R :null
S :Y
2nd part: I also tried another scheme, using 'execute immediate'; test code is attached below, and it also has errors.
begin execute immediate 'begin if :q is null then :q := ''X''; else :q := ''Y''; :r := ''Z''; end if; end;' using in out :q, out :r; end;
, Java Code:
import java.sql.*;

public class RunDynamicSQL {
    public static void main(String s[]) throws SQLException {
        String URL = "jdbc:oracle:thin:@192.168.11.199:1521:TIBSTEST";
        Connection con = null;
        try {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            con = (Connection) DriverManager.getConnection(URL, "FBP1DEV", "FBP1DEV");
            String SQL = "begin execute immediate 'begin if :q is null then :q := ''X''; else :q := ''Y''; :r := ''Z''; end if; end;' using in out :q, out :r; end;";
            CallableStatement stmt = con.prepareCall(SQL);
            stmt.registerOutParameter("q", Types.VARCHAR);
            stmt.registerOutParameter("r", Types.VARCHAR);
            stmt.setString("q", "A");
            stmt.execute();
            System.out.println("Q :" + stmt.getString("q"));
            System.out.println("R :" + stmt.getString("r"));
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (con != null) {
                con.close();
            }
        }
    }
}
, the output is below; we can see that when parameter 'q' is in IN OUT mode, we can't get its final value:
Q :null
R :Z
, now I tried my workaround, and it works fine by using temporary variables; my named parameter is split into 2 roles, one for IN and another for OUT, and now I can get the final OUT value.
declare q clob; r clob; begin q := ?; r := ?; execute immediate 'begin if :q is null then :q := ''X''; else :q := ''Y''; :r := ''Z''; end if; end;' using in out q, out r; ? := q; ? := r; end;
, my test java code,
import java.sql.*;

public class RunDynamicSQL {
    public static void main(String s[]) throws SQLException {
        String URL = "jdbc:oracle:thin:@192.168.11.199:1521:TIBSTEST";
        Connection con = null;
        try {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            con = (Connection) DriverManager.getConnection(URL, "FBP1DEV", "FBP1DEV");
            String SQL = "declare q clob; r clob; begin q := ?; r := ?; execute immediate 'begin if :q is null then :q := ''X''; else :q := ''Y''; :r := ''Z''; end if; end;' using in out q, out r; ? := q; ? := r; end;";
            CallableStatement stmt = con.prepareCall(SQL);
            stmt.registerOutParameter(3, Types.VARCHAR);
            stmt.registerOutParameter(4, Types.VARCHAR);
            stmt.setString(1, "A");
            stmt.setString(2, "A");
            stmt.execute();
            System.out.println("Q :" + stmt.getString(3));
            System.out.println("R :" + stmt.getString(4));
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (con != null) {
                con.close();
            }
        }
    }
}
, the output is expected,
Q :Y
R :Z
Database:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
JDBC Driver, extracted from ojdbc6_g.jar/META-INF/MANIFEST.MF :
Created-By: 1.5.0_30-b03 (Sun Microsystems Inc.)
Implementation-Vendor: Oracle Corporation
Implementation-Title: JDBC debug
Implementation-Version: 11.2.0.3.0
Repository-Id: JAVAVM_11.2.0.3.0_LINUX_110823
Specification-Vendor: Sun Microsystems Inc.
Specification-Title: JDBC
Specification-Version: 4.0
Main-Class: oracle.jdbc.OracleDriver
JDK:
java version "1.7.0"
Java(TM) SE Runtime Environment (build 1.7.0-b147)
Java HotSpot(TM) Client VM (build 21.0-b17, mixed mode, sharing)
Edited by: jamxval on 2013-3-22 2:01PM (UTC+08:00), Give full test java program and SQL, added environment/API level; Attached another problem.
Edited by: jamxval on 2013-3-26 17:57 (UTC +08), Adjust code style
Hi, thanks for your response. Now I see: named parameters are for stored procedures only; for a PL/SQL block we call them placeholder names.
After casting my java.sql.CallableStatement to oracle.jdbc.OracleCallableStatement, I can find setStringAtName.
Now I have only one question: I can't find corresponding methods for registerOutParameter. How do we fetch the output value?
I tried callableStatement.getString("q"); it reports errors, but there is no ordinal binding in my source code. Do placeholder names not support OUT mode?
<Java>
CallableStatement stmt = con.prepareCall("BEGIN if :q is null then :r := 'X'; else :s := 'Y'; end if; END;");
oracle.jdbc.OracleCallableStatement call = (oracle.jdbc.OracleCallableStatement) stmt;
call.registerOutParameter("q", Types.VARCHAR);
call.registerOutParameter("r", Types.VARCHAR);
call.registerOutParameter("s", Types.VARCHAR);
call.setStringAtName("q", "A");
call.setStringAtName("r", "A");
call.setStringAtName("s", "A");
call.execute();
System.out.println("Q :" + call.getString("q"));
</Java>
<output>
java.sql.SQLException: Operation not allowed: Ordinal binding and Named binding cannot be combined!
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146)
at oracle.jdbc.driver.OracleCallableStatement.getString(OracleCallableStatement.java:2834)
at RunPLSQLBlock.main(RunPLSQLBlock.java:33)
</output>
By the way, in the SQL marked 'problematic' below, when my code uses 'execute immediate' with a placeholder name bound in IN OUT mode, we always get a NULL value back (i.e. for ':q'), but we do get the final value of ':r' when ':r' is bound in OUT mode only. I now have a workaround, shown in the 'my workaround' block below, which splits the IN OUT role into two parts; it works.
The difference between 'problematic' and 'my workaround' suggests that something works unexpectedly when the driver processes the placeholder names: both 'my workaround' and ':r' in the problematic case show that 'execute immediate' returns the output values correctly, yet the driver layer cannot retrieve them.
<SQL name = 'problematic'>
begin
execute immediate 'begin if :q is null then :q := ''X''; else :q := ''Y''; :r := ''Z''; end if; end;'
using in out :q, out :r;
end;
</SQL>
<SQL name='my workaround'>
declare
q clob;
r clob;
begin
q := ?;
r := ?;
execute immediate 'begin if :q is null then :q := ''X''; else :q := ''Y''; :r := ''Z''; end if; end;' using in out q, out r;
? := q;
? := r;
end;
</SQL>
Edited by: EJP on 26/03/2013 14:14
Most of the time queries remain in suspended state in our production environment.
We are not getting any solution.
Thanks

Wait type PAGEIOLATCH_SH occurs while a session waits for a requested page to be read from disk into the buffer pool; PAGELATCH_EX occurs when the requested page is already in the buffer but some other process/request is holding a latch on it.
SQL Server manages these waits internally. If you are facing a performance issue, you need to check:
1) Fragmentation of indexes. If they are fragmented, defragment them so that execution time drops and wait time is also reduced.
2) If any indexes are missing, create them.
3) Update the statistics of the tables.
4) Check whether any table scans are occurring and, if so, create appropriate indexes on those tables, etc.
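The fragmentation check in step 1 is usually driven by a threshold rather than done unconditionally. A minimal sketch of that decision logic, assuming the commonly cited cutoffs of roughly 10% (reorganize) and 30% (rebuild) — starting points to tune for your own system, not fixed rules:

```java
public class IndexMaintenanceAdvisor {

    // Recommend a maintenance action from an index's fragmentation percentage.
    // The 10% / 30% cutoffs are conventional defaults, not hard requirements.
    static String recommend(double fragmentationPercent) {
        if (fragmentationPercent > 30.0) {
            return "REBUILD";      // heavy fragmentation: full rebuild
        } else if (fragmentationPercent > 10.0) {
            return "REORGANIZE";   // moderate fragmentation: cheaper defrag
        }
        return "NONE";             // light fragmentation: leave it alone
    }

    public static void main(String[] args) {
        System.out.println(recommend(45.2)); // REBUILD
        System.out.println(recommend(17.8)); // REORGANIZE
        System.out.println(recommend(3.1));  // NONE
    }
}
```

In practice the fragmentation percentage would come from a query against the server's index statistics views; the helper above only captures the threshold decision.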