SQL SELECT in bind operation
Hi,
Here is a snippet of some sample code. The commented-out
query works and I receive my data. setString sets
the string, but the query fails. Am I missing something?
Bear in mind that the connection is fine, etc. The
commented-out SELECT statement works, but the one
requiring the parameter does not!
try
{
    //string request("select REF(MSNNr) from secure_data MSNNr where MSNNR_M='SSZ0000001'");
    string request("select REF(MSNNr) from secure_data MSNNr where MSNNr_m = :idnumber");
    OCCI_STD_NAMESPACE::string str("SSZ0000001");
    query->setString(1, str);
    cout << query->getString(1) << endl;
    cout << query->getSQL() << endl;
    ResultSet *result = query->executeQuery();
    if (result->next())
    {
        Ref<MSNNr> msnid = result->getRef(1);
        cout << "found something" << endl;
    }
}
catch (SQLException &ex)  // catch by reference, not by value
{
    cout << "exception thrown : " << ex.what() << endl;
}
Many Thanks,
Raffaele
That is more straightforward, but DachshundThor's
suggestion (above) implicitly ties the tables
together in the same way.
Would that not produce a cartesian join?
Yes, it would start with the Cartesian product, but
then the WHERE clauses would cut that down to what an
ordinary join would have produced. You are just describing the way relational databases conceptually do joins: start with the full Cartesian product and whittle it down. That is only conceptual, of course; it would never work if a database were actually implemented that way.
The correct select statement (since authors are not assigned ISBNs) is probably something like
select * from author a, book b where b.authorid = a.id
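The same join can also be written in ANSI syntax, which makes the join condition explicit; a sketch using the author/book names assumed from the query above:

```sql
-- Equivalent ANSI-join form of the comma-list join above;
-- table and column names are taken from the earlier example.
select *
from author a
join book b
  on b.authorid = a.id;
```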
Similar Messages
-
Oracle's SQL select should have a syntax like this
Oracle's SQL is very powerful but lacks a basic feature: the LIMIT and OFFSET syntax
present in PostgreSQL. I recommend that Oracle support this syntax in a future release.
more details of this syntax can be found at :
http://www.postgresql.org/idocs/index.php?sql-select.html
SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]
* | expression [ AS output_name ] [, ...]
[ FROM from_item [, ...] ]
[ WHERE condition ]
[ GROUP BY expression [, ...] ]
[ HAVING condition [, ...] ]
[ { UNION | INTERSECT | EXCEPT } [ ALL ] select ]
[ ORDER BY expression [ ASC | DESC | USING operator ] [, ...] ]
[ FOR UPDATE [ OF tablename [, ...] ] ]
[ LIMIT { count | ALL } ]
[ OFFSET start ]
I've executed the above queries against my table and the results follow:
The structure of the table
Name Null? Type
FATWAID NOT NULL NUMBER(11)
FATWASTATUS NUMBER(1)
SUBJ_NO NUMBER(4)
LANG CHAR(1)
SCHOLAR NUMBER(2)
REVIEWER NUMBER(2)
AUDITOR NUMBER(2)
PUBLISHER NUMBER(2)
SUBSCRIBER NUMBER(2)
ENTRY NUMBER(2)
FATWATITLE VARCHAR2(100)
FATWATEXT CLOB
FATWADATE DATE
TRANSLATE CHAR(1)
SELECTED NUMBER(1)
SUBJ_MAIN NUMBER(4)
I have set timing on and executed each query above, following are the statistics.
First the number of rows in the table.
SQL>set timing on
SQL>select count (*) from fatwa;
COUNT(*)
1179651
Elapsed: 00:00:13.05
The query of Andrew Clarke
SQL>SELECT fatwaid FROM (SELECT fatwaid, rownum as ranking FROM fatwa) r
WHERE r.ranking BETWEEN &OFFSET AND &LIMIT
SQL>/
Enter value for offset: 100000
Enter value for limit: 100009
old 2: WHERE r.ranking BETWEEN &OFFSET AND &LIMIT
new 2: WHERE r.ranking BETWEEN 100000 AND 100009
FATWAID
96592
96593
96594
96595
96596
96597
96598
96599
96600
96601
10 rows selected.
Elapsed: 00:00:12.02
SQL> /
Enter value for offset: 1000000
Enter value for limit: 1000009
old 2: WHERE r.ranking BETWEEN &OFFSET AND &LIMIT
new 2: WHERE r.ranking BETWEEN 1000000 AND 1000009
FATWAID
994621
994622
994623
994624
994625
994626
995769
995770
995771
995772
10 rows selected.
Elapsed: 00:00:12.00
The response time is decreasing because of the use of bind variables,
but 12 seconds is a sign of poor performance.
Now a slight modification to Clarke's query:
I will add an ORDER BY clause.
SQL> ed
Wrote file afiedt.buf
1 SELECT fatwaid FROM (SELECT fatwaid, rownum as ranking FROM fatwa ORDER BY fatwaid) r
2* WHERE r.ranking BETWEEN &OFFSET AND &LIMIT
SQL> /
Enter value for offset: 100001
Enter value for limit: 100010
old 2: WHERE r.ranking BETWEEN &OFFSET AND &LIMIT
new 2: WHERE r.ranking BETWEEN 100001 AND 100010
FATWAID
100032
100033
100034
100035
100036
100037
100038
100039
100040
100041
10 rows selected.
Elapsed: 00:00:04.00 -- time reduced from 12 to 4 seconds
A time of 4 seconds is acceptable but not good;
response time should be in milliseconds.
SQL> /
Enter value for offset: 1000001
Enter value for limit: 1000010
old 2: WHERE r.ranking BETWEEN &OFFSET AND &LIMIT
new 2: WHERE r.ranking BETWEEN 1000001 AND 1000010
FATWAID
1000032
1000033
1000034
1000035
1000036
1000037
1000038
1000039
1000040
1000041
10 rows selected.
Elapsed: 00:00:03.09 -- this reduction is because of bind variables
The query of Chris Gates
SQL>select fatwaid
2 from ( select a.*, rownum r
3 from ( select *
4 from fatwa
5 --where x = :host_variable
6 order by fatwaid ) a
7 where rownum < &HigerBound )
8 where r > &LowerBound
SQL> /
Enter value for higerbound: 100011
old 7: where rownum < &HigerBound )
new 7: where rownum < 100011 )
Enter value for lowerbound: 100000
old 8: where r > &LowerBound
new 8: where r > 100000
FATWAID
100032
100033
100034
100035
100036
100037
100038
100039
100040
100041
10 rows selected.
Elapsed: 00:00:02.04
This seems to be fast
SQL> /
Enter value for higerbound: 1000011
old 7: where rownum < &HigerBound )
new 7: where rownum < 1000011 )
Enter value for lowerbound: 1000000
old 8: where r > &LowerBound
new 8: where r > 1000000
FATWAID
1000032
1000033
1000034
1000035
1000036
1000037
1000038
1000039
1000040
1000041
10 rows selected.
Elapsed: 00:01:14.02
but this is much worse when the upper bound is 1 million.
Finally Myers query
SQL> select fatwaid from
2 (select /*+ INDEX(fawtaid pk_fatwa) */ fatwaid, rownum x from fatwa
3 where rownum < &UpperBound )
4 where x > &LowerBound;
Enter value for upperbound: 100011
old 3: where rownum < &UpperBound )
new 3: where rownum < 100011 )
Enter value for lowerbound: 100000
old 4: where x > &LowerBound
new 4: where x > 100000
FATWAID
122418
122419
122420
122421
122422
122423
122424
122425
122426
122427
10 rows selected.
Elapsed: 00:00:00.03 -- too fast
SQL> /
Enter value for upperbound: 1000011
old 3: where rownum < &UpperBound )
new 3: where rownum < 1000011 )
Enter value for lowerbound: 1000000
old 4: where x > &LowerBound
new 4: where x > 1000000
FATWAID
984211
984212
984213
984214
984215
984216
984217
984218
984219
984220
10 rows selected.
Elapsed: 00:00:02.02 -- also satisfactory with 1 million rows, but it is not in milliseconds
The same query after using order by clause
SQL> select fatwaid from
2 (select /*+ INDEX(fawtaid pk_fatwa) */ fatwaid, rownum x from fatwa
3 where rownum < &UpperBound ORDER BY fatwaid)
4 where x > &LowerBound;
Enter value for upperbound: 100011
old 3: where rownum < &UpperBound ORDER BY fatwaid)
new 3: where rownum < 100011 ORDER BY fatwaid)
Enter value for lowerbound: 100000
old 4: where x > &LowerBound
new 4: where x > 100000
FATWAID
100032
100033
100034
100035
100036
100037
100038
100039
100040
100041
10 rows selected.
Elapsed: 00:00:00.06
SQL> /
Enter value for upperbound: 1000011
old 3: where rownum < &UpperBound ORDER BY fatwaid)
new 3: where rownum < 1000011 ORDER BY fatwaid)
Enter value for lowerbound: 1000000
old 4: where x > &LowerBound
new 4: where x > 1000000
FATWAID
1000032
1000033
1000034
1000035
1000036
1000037
1000038
1000039
1000040
1000041
10 rows selected.
Elapsed: 00:00:07.03 -- slow
SQL> /
Enter value for upperbound: 1000011
old 3: where rownum < &UpperBound ORDER BY fatwaid)
new 3: where rownum < 1000011 ORDER BY fatwaid)
Enter value for lowerbound: 1000000
old 4: where x > &LowerBound
new 4: where x > 1000000
FATWAID
1000032
1000033
1000034
1000035
1000036
1000037
1000038
1000039
1000040
1000041
10 rows selected.
Elapsed: 00:00:00.06
When I execute the same query again it brings records from
the SGA, so it is very fast.
Now which one to choose?
Andrew's and Myers' queries are good, and currently I am using
Myers' query.
There should be some technique to do this in the most efficient way.
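For the record, later Oracle releases (12.1 and up) added a row-limiting clause that expresses LIMIT/OFFSET directly; a sketch against the fatwa table used in the timings above:

```sql
-- Oracle 12c+ row-limiting clause, equivalent to OFFSET/LIMIT.
-- fatwa/fatwaid are the table and column from the tests above.
SELECT fatwaid
FROM   fatwa
ORDER  BY fatwaid
OFFSET 100000 ROWS FETCH NEXT 10 ROWS ONLY;
```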
any input is appreciated. -
Bind Operator to Function with optional parameter
Hi,
as mentioned in the subject, I would like to create an operator that is bound to a function with an optional parameter:
CREATE OR REPLACE FUNCTION
TS_Base_Func(iobject IN CIBase, format IN VARCHAR2 DEFAULT NULL) RETURN VARCHAR2 IS
BEGIN
RETURN interval_object.IntervalToString(format);
END TS_Base_Func;
I can bind the operator with a VARCHAR2 as second parameter, but how can I bind an operator without the second parameter to this function?
Thanks!
What about using a "wrapper" function to implement what you would like to do? Here is a small sample:
SQL> CREATE OR REPLACE FUNCTION TEST
2 (
3 A IN VARCHAR2
4 , B IN VARCHAR2 DEFAULT NULL
5 )
6 RETURN NUMBER
7 AS
8 BEGIN
9 IF A = B THEN
10 RETURN 1;
11 ELSE
12 RETURN 0;
13 END IF;
14 END;
15 /
Function created.
SQL> CREATE OR REPLACE FUNCTION TEST_WRAPPER
2 (
3 A IN VARCHAR2
4 )
5 RETURN NUMBER
6 AS
7 BEGIN
8 RETURN TEST(A);
9 END;
10 /
Function created.
SQL> CREATE OR REPLACE OPERATOR TestOperator
2 BINDING (VARCHAR2) RETURN NUMBER USING TEST_WRAPPER
3 , (VARCHAR2, VARCHAR2) RETURN NUMBER USING TEST;
Operator created.
SQL> SELECT TestOperator(1) FROM DUAL;
TESTOPERATOR(1)
0
SQL> SELECT TestOperator(1,2) FROM DUAL;
TESTOPERATOR(1,2)
0
SQL> SELECT TestOperator(1,1) FROM DUAL;
TESTOPERATOR(1,1)
1
SQL>
The TEST_WRAPPER function has the signature you need for the single-parameter case. However, under the hood it calls the TEST function. -
Rewrite sql to avoid filter operation
Hi All,
I found that the SQL below (along with some similar statements) is causing high CPU usage.
SELECT :B1 AS ID ,
DECODE((SELECT 1
FROM DUAL
WHERE EXISTS (SELECT NULL
FROM ONS
WHERE PARENT_ID = :B1 )), 1, 1, 0) AS IP_RELATION ,
DECODE((SELECT 1
FROM DUAL
WHERE EXISTS (SELECT NULL
FROM ONS
WHERE ULTIMATE_PARENT_GID = :B1 )), 1, 1, 0) AS UP_RELATION ,
DECODE((SELECT 1
FROM DUAL
WHERE EXISTS (SELECT NULL FROM AFFILIATIONS WHERE AFFILIATED_ID= :B1 )), 1, 1, 0) AS AFF_RELATION ,
DECODE((SELECT 1
FROM DUAL
WHERE EXISTS (SELECT NULL FROM JOINT_VENTURES WHERE JOINT_VENTURE_ID= :B1 )), 1, 1, 0) AS JV_RELATION ,
DECODE((SELECT 1
FROM DUAL
WHERE EXISTS (SELECT NULL FROM SUCCESSORS WHERE SUCCESSOR_ID= :B1 )), 1, 1, 0) AS SUC_RELATION ,
DECODE((SELECT 1
FROM DUAL
WHERE EXISTS (SELECT NULL FROM COUNTERPARTY WHERE CP_TAX_AUTHORITY_ID = :B1 )), 1, 1, 0) AS TAX_AUTH_RELATION ,
DECODE((SELECT 1
FROM DUAL
WHERE EXISTS (SELECT NULL FROM COUNTERPARTY WHERE CP_PRIM_REGULATOR_ID = :B1 )), 1, 1, 0) AS PRIM_REG_RELATION ,
DECODE((SELECT 1
FROM DUAL
WHERE EXISTS (SELECT NULL FROM ONS WHERE DUPLICATE_OF_ID = :B1 )), 1, 1, 0) AS DUP_RELATION ,
DECODE((SELECT 1
FROM DUAL
WHERE EXISTS (SELECT NULL FROM ONS WHERE REG_AUTHORITY_ID = :B1 )), 1, 1, 0) AS REG_AUTH_RELATION
FROM DUAL
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 2 (100)| |
|* 1 | FILTER | | | | | |
| 2 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | IDX_IMMEDIATE_PARENT_ID | 1 | 3 | 2 (0)| 00:00:01 |
|* 4 | FILTER | | | | | |
| 5 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | IDX_ULTIMATE_PARENT_ID | 2 | 4 | 2 (0)| 00:00:01 |
|* 7 | FILTER | | | | | |
| 8 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
|* 9 | INDEX FAST FULL SCAN| PK_ORG_AFFILIATED_WITH | 1 | 7 | 294 (7)| 00:00:04 |
|* 10 | FILTER | | | | | |
| 11 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
|* 12 | INDEX FULL SCAN | PK_ORG_JOINT_VENTURE_OF | 1 | 7 | 3 (0)| 00:00:01 |
|* 13 | FILTER | | | | | |
| 14 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
|* 15 | INDEX FAST FULL SCAN| PK_ONS_SUCCEEDED_BY | 1 | 7 | 79 (7)| 00:00:01 |
|* 16 | FILTER | | | | | |
| 17 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
|* 18 | INDEX RANGE SCAN | IDX_ORG_CP_TAX_AUTHORITY_ID | 2 | 14 | 2 (0)| 00:00:01 |
|* 19 | FILTER | | | | | |
| 20 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
|* 21 | INDEX RANGE SCAN | IDX_ORGCP_PRIM_REGULATOR_ID | 1 | 4 | 2 (0)| 00:00:01 |
|* 22 | FILTER | | | | | |
| 23 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
|* 24 | TABLE ACCESS FULL | ONS | 1 | 2 | 27013 (4)| 00:05:25 |
|* 25 | FILTER | | | | | |
| 26 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
|* 27 | TABLE ACCESS FULL | ONS | 1 | 2 | 475 (3)| 00:00:06 |
| 28 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
Peeked Binds (identified by position):
2 - :B1 (NUMBER, Primary=1)
3 - :B1 (NUMBER, Primary=1)
4 - :B1 (NUMBER, Primary=1)
5 - :B1 (NUMBER, Primary=1)
6 - :B1 (NUMBER, Primary=1)
7 - :B1 (NUMBER, Primary=1)
8 - :B1 (NUMBER, Primary=1)
9 - :B1 (NUMBER, Primary=1)
10 - :B1 (NUMBER, Primary=1)
Predicate Information (identified by operation id):
1 - filter( IS NOT NULL)
3 - access("IMMEDIATE_PARENT_ID"=:B1)
4 - filter( IS NOT NULL)
6 - access("ULTIMATE_PARENT_ID"=:B1)
7 - filter( IS NOT NULL)
9 - filter("AFFILIATED_ID"=:B1)
10 - filter( IS NOT NULL)
12 - access("JOINT_VENTURE_ID"=:B1)
filter("JOINT_VENTURE_ID"=:B1)
13 - filter( IS NOT NULL)
15 - filter("SUCCESSOR_ID"=:B1)
16 - filter( IS NOT NULL)
18 - access("CP_TAX_AUTHORITY_ID"=:B1)
19 - filter( IS NOT NULL)
21 - access("CP_PRIM_REGULATOR_ID"=:B1)
22 - filter( IS NOT NULL)
24 - filter("DUPLICATE_OF_ID"=:B1)
25 - filter( IS NOT NULL)
27 - filter("REG_AUTHORITY_ID"=:B1)
Oracle version: 10.2.0.4, RAC, 2 nodes
Is there any way to rewrite this SQL to avoid the FILTER operations?
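One possible shape for such a rewrite - a sketch only, using the same tables and bind as above - is to drop the DECODE-over-DUAL scaffolding and compute each flag with CASE WHEN EXISTS:

```sql
-- Sketch: the same flags without the DECODE((SELECT 1 FROM DUAL ...)) wrapper.
-- Table and column names are those from the statement above.
SELECT :B1 AS ID,
       CASE WHEN EXISTS (SELECT NULL FROM ONS WHERE PARENT_ID = :B1)
            THEN 1 ELSE 0 END AS IP_RELATION,
       CASE WHEN EXISTS (SELECT NULL FROM ONS WHERE ULTIMATE_PARENT_GID = :B1)
            THEN 1 ELSE 0 END AS UP_RELATION
       -- ... the remaining seven flags follow the same pattern ...
FROM DUAL;
```

Note this removes only the extra DECODE/DUAL layer; each EXISTS probe still has to be satisfied by an index or scan.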
Please let me know if you need any more details....
My bad, I overlooked the execution plan.
The execution plan below has been extracted from a development database which is an exact replica of the production database.
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
|* 1 | FILTER | | 1 | | 1 |00:00:00.72 | 8028 | 5986 |
| 2 | FAST DUAL | | 1 | 1 | 1 |00:00:00.01 | 0 | 0 |
| 3 | PARTITION RANGE ALL | | 1 | 1 | 1 |00:00:00.72 | 8028 | 5986 |
|* 4 | TABLE ACCESS FULL | ONS | 1 | 1 | 1 |00:00:00.72 | 8028 | 5986 |
|* 5 | FILTER | | 1 | | 1 |00:00:00.19 | 7 | 0 |
| 6 | FAST DUAL | | 1 | 1 | 1 |00:00:00.01 | 0 | 0 |
| 7 | PX COORDINATOR | | 1 | | 1 |00:00:00.19 | 7 | 0 |
| 8 | PX SEND QC (RANDOM) | :TQ10000 | 0 | 1 | 0 |00:00:00.01 | 0 | 0 |
| 9 | PX PARTITION RANGE ALL| | 0 | 1 | 0 |00:00:00.01 | 0 | 0 |
|* 10 | INDEX RANGE SCAN | IDX_ULTIMATE_PARENT_ID | 0 | 1 | 0 |00:00:00.01 | 0 | 0 |
|* 11 | FILTER | | 1 | | 0 |00:00:00.11 | 1231 | 0 |
| 12 | FAST DUAL | | 0 | 1 | 0 |00:00:00.01 | 0 | 0 |
|* 13 | INDEX FAST FULL SCAN | PK_ORG_AFFILIATED_WITH | 1 | 1 | 0 |00:00:00.11 | 1231 | 0 |
|* 14 | FILTER | | 1 | | 0 |00:00:00.01 | 7 | 0 |
| 15 | FAST DUAL | | 0 | 1 | 0 |00:00:00.01 | 0 | 0 |
|* 16 | INDEX FAST FULL SCAN | PK_ORG_JOINT_VENTURE_OF | 1 | 1 | 0 |00:00:00.01 | 7 | 0 |
|* 17 | FILTER | | 1 | | 0 |00:00:00.02 | 229 | 0 |
| 18 | FAST DUAL | | 0 | 1 | 0 |00:00:00.01 | 0 | 0 |
|* 19 | INDEX FAST FULL SCAN | PK_ONS_SUCCEEDED_BY | 1 | 1 | 0 |00:00:00.02 | 229 | 0 |
|* 20 | FILTER | | 1 | | 1 |00:00:00.01 | 3 | 0 |
| 21 | FAST DUAL | | 1 | 1 | 1 |00:00:00.01 | 0 | 0 |
|* 22 | INDEX RANGE SCAN | IDX_CP_TAX_AUTHORITY_ID | 1 | 2 | 1 |00:00:00.01 | 3 | 0 |
|* 23 | FILTER | | 1 | | 1 |00:00:00.01 | 3 | 0 |
| 24 | FAST DUAL | | 1 | 1 | 1 |00:00:00.01 | 0 | 0 |
|* 25 | INDEX RANGE SCAN | IDX_CP_PRIM_REGULATOR_ID | 1 | 1 | 1 |00:00:00.01 | 3 | 0 |
|* 26 | FILTER | | 1 | | 1 |00:00:02.20 | 28923 | 21562 |
| 27 | FAST DUAL | | 1 | 1 | 1 |00:00:00.01 | 0 | 0 |
| 28 | PARTITION RANGE ALL | | 1 | 1 | 1 |00:00:02.20 | 28923 | 21562 |
|* 29 | TABLE ACCESS FULL | ONS | 1 | 1 | 1 |00:00:02.20 | 28923 | 21562 |
|* 30 | FILTER | | 1 | | 1 |00:00:00.01 | 4 | 5 |
| 31 | FAST DUAL | | 1 | 1 | 1 |00:00:00.01 | 0 | 0 |
| 32 | PARTITION RANGE ALL | | 1 | 1 | 1 |00:00:00.01 | 4 | 5 |
|* 33 | TABLE ACCESS FULL | ONS | 1 | 1 | 1 |00:00:00.01 | 4 | 5 |
| 34 | FAST DUAL | | 1 | 1 | 1 |00:00:00.01 | 0 | 0 |
Predicate Information (identified by operation id):
1 - filter( IS NOT NULL)
4 - filter("IMMEDIATE_PARENT_ID"=:B1)
5 - filter( IS NOT NULL)
10 - access("ULTIMATE_PARENT_ID"=:B1)
11 - filter( IS NOT NULL)
13 - filter("AFFILIATED_ID"=:B1)
14 - filter( IS NOT NULL)
16 - filter("JOINT_VENTURE_ID"=:B1)
17 - filter( IS NOT NULL)
19 - filter("SUCCESSOR_ID"=:B1)
20 - filter( IS NOT NULL)
22 - access("CP_TAX_AUTHORITY_ID"=:B1)
23 - filter( IS NOT NULL)
25 - access("CP_PRIM_REGULATOR_ID"=:B1)
26 - filter( IS NOT NULL)
29 - filter("DUPLICATE_OF_ID"=:B1)
30 - filter( IS NOT NULL)
33 - filter("REG_AUTHORITY_ID"=:B1)
It took just 2.20 seconds, but why does it consume so much CPU?
We are about to plug a new module into this database, hence the ONS table is partitioned. It is partitioned on the column PROVIDER, which separates the existing and new modules into different partitions; this makes loading easier without affecting the existing module's data (before a load we also set the partition-local indexes to the UNUSABLE state). This table is also the parent table for about 6 child tables, so we decided to partition the child tables too, by adding the PROVIDER column to all of them and partitioning on that column. The parent-child relationship is built on the ID column in all the tables.
All the SQL statements will be altered to use the PROVIDER column to filter old and new module data.
Do you think we are taking the right approach? I would be thankful if you could help me with the precise design of this table.
As a side thought - and one I would have to investigate - since you have declared a number of indexes with "case insensitive sorting", is it possible that you could work around this idea, drop a few of the existing indexes on "lower(column)", and use case-insensitive indexes for these comparisons?
I will test it in the development database, but what is the predicted performance improvement? And please let me know why you suspect that "lower(column)" should be avoided in favour of case-insensitive indexes.
Anyway, we are implementing a text index on this table and dropping all the unwanted indexes.
I've written a short note on my blog about the "exists subquery" and the varying cost of the tablescan lines.
I am a regular reader of your blog; after seeing your test case I understood the concept crystal clear. Thanks a lot.... -
Oracle SQL SELECT query takes longer than expected
Hi,
I am facing a problem with a SQL SELECT statement: the query takes a long time to return results from the database.
The query is as follows.
select /*+rule */ f1.id,f1.fdn,p1.attr_name,p1.attr_value from fdnmappingtable f1,parametertable p1 where p1.id = f1.id and ((f1.object_type ='ne_sub_type.780' )) and ( (f1.id in(select id from fdnmappingtable where fdn like '0=#1#/14=#S0058-3#/17=#S0058-3#/18=#1#/780=#5#%')))order by f1.id asc
This query takes more than 4 seconds to return results on a system where the DB has been running for more than 1 month.
The same query takes only a few milliseconds (50-100 ms) on a system where the DB is freshly installed, and the data in the tables is the same on both systems.
Kindly advise on what is going wrong.
Regards,
Purushotham
SQL> @/alcatel/omc1/data/query.sql
2 ;
9 rows selected.
Execution Plan
Plan hash value: 3745571015
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | SORT ORDER BY | |
| 2 | NESTED LOOPS | |
| 3 | NESTED LOOPS | |
| 4 | TABLE ACCESS FULL | PARAMETERTABLE |
|* 5 | TABLE ACCESS BY INDEX ROWID| FDNMAPPINGTABLE |
|* 6 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
|* 7 | TABLE ACCESS BY INDEX ROWID | FDNMAPPINGTABLE |
|* 8 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
Predicate Information (identified by operation id):
5 - filter("F1"."OBJECT_TYPE"='ne_sub_type.780')
6 - access("P1"."ID"="F1"."ID")
7 - filter("FDN" LIKE '0=#1#/14=#S0058-3#/17=#S0058-3#/18=#1#/780=#5#
8 - access("F1"."ID"="ID")
Note
- rule based optimizer used (consider using cbo)
Statistics
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
0 bytes sent via SQL*Net to client
0 bytes received via SQL*Net from client
0 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
9 rows processed
SQL> -
Dynamic SQL and Bulk Bind... Interesting Problem !!!
Hi Forum !!
I've got a very interesting problem involving dynamic SQL and bulk binds. I really hope you guys have some suggestions for me...
Table A contains a column named TX_FORMULA. It holds many strings with expressions like '.3 * 2 + 1.5' or '(3.4 + 2) / .3', all well-formed numeric formulas. I want to evaluate each formula and obtain the number each calculation produces.
I wrote something like this:
DECLARE
TYPE T_FormulasNum IS TABLE OF A.TX_FORMULA%TYPE
INDEX BY BINARY_INTEGER;
TYPE T_MontoIndicador IS TABLE OF A.MT_NUMBER%TYPE
INDEX BY BINARY_INTEGER;
V_FormulasNum T_FormulasNum;
V_MontoIndicador T_MontoIndicador;
BEGIN
SELECT DISTINCT CD_INDICADOR,
TX_FORMULA_NUMERICA
BULK COLLECT INTO V_CodIndicador, V_FormulasNum
FROM A;
FORALL i IN V_FormulasNum.FIRST..V_FormulasNum.LAST
EXECUTE IMMEDIATE
'BEGIN
:1 := TO_NUMBER(:2);
END;'
USING V_FormulasNum(i) RETURNING INTO V_MontoIndicador;
END;
But I'm getting the following messages:
ORA-06550: line 22, column 43:
PLS-00597: expression 'V_MONTOINDICADOR' in the INTO list is of wrong type
ORA-06550: line 18, column 5:
PL/SQL: Statement ignored
ORA-06550: line 18, column 5:
PLS-00435: DML statement without BULK In-BIND cannot be used inside FORALL
Any Idea to solve this problem ?
Thanks in Advance !!
Hallo,
Many, many errors...
1. You can use FORALL only with DML statements; in your case you must use a simple FOR loop.
2. You can use bind variables only for values; an expression to be evaluated has to be concatenated into the statement as a literal (causing a hard parse).
3. The RETURNING INTO clause is inappropriate here; use an OUT bind variable instead.
4. Remark: FOR i IN tab.FIRST..tab.LAST is not fully correct: if there are no results, you get a NO_DATA_FOUND exception. Use 1..tab.COUNT instead.
This code works.
DECLARE
TYPE T_FormulasNum IS TABLE OF VARCHAR2(255)
INDEX BY BINARY_INTEGER;
TYPE T_MontoIndicador IS TABLE OF NUMBER
INDEX BY BINARY_INTEGER;
V_FormulasNum T_FormulasNum;
V_MontoIndicador T_MontoIndicador;
BEGIN
SELECT DISTINCT CD_INDICATOR,
TX_FORMULA_NUMERICA
BULK COLLECT INTO V_MontoIndicador, V_FormulasNum
FROM A;
FOR i IN 1..V_FormulasNum.count
LOOP
EXECUTE IMMEDIATE
'BEGIN
:v_motto := TO_NUMBER('||v_formulasnum(i)||');
END;'
USING OUT V_MontoIndicador(i);
dbms_output.put_line(v_montoindicador(i));
END LOOP;
END;
You have to read more about bulk binding and dynamic SQL.
HTH
Regards
Dmytro
Test table
a
(cd_indicator number,
tx_formula_numerica VARCHAR2(255))
CD_INDICATOR TX_FORMULA_NUMERICA
2 (5+5)*2
1 2*3*4
Message was edited by:
Dmytro Dekhtyaryuk -
SQL query with Bind variable with slower execution plan
I have a 'normal' SQL select-insert statement (not using bind variables) and it yields the following execution plan:
Execution Plan
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=7 Card=1 Bytes=148)
1 0 HASH JOIN (Cost=7 Card=1 Bytes=148)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=4 Card=1 Bytes=100)
3 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=3 Card=1)
4 1 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE)
(Cost=2 Card=135 Bytes=6480)
Statistics
0 recursive calls
18 db block gets
15558 consistent gets
47 physical reads
9896 redo size
423 bytes sent via SQL*Net to client
1095 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
55 rows processed
I have the same query but run it with bind variables instead (I tested with both Oracle Forms and SQL*Plus); it takes considerably longer with a different execution plan:
Execution Plan
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=407 Card=1 Bytes=148)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=3 Card=1 Bytes=100)
2 1 NESTED LOOPS (Cost=407 Card=1 Bytes=148)
3 2 INDEX (FAST FULL SCAN) OF TABLEB_IDX_003' (NON-UNIQUE) (Cost=2 Card=135 Bytes=6480)
4 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=2 Card=1)
Statistics
0 recursive calls
12 db block gets
3003199 consistent gets
54 physical reads
9448 redo size
423 bytes sent via SQL*Net to client
1258 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
55 rows processed
TABLEA has around 3 million records while TABLEB has 300 records. Is there any way I can improve the speed of the SQL query with bind variables? I have DBA access to the database.
Regards
Ivan
Many thanks for your reply.
I have already gathered statistics for both TABLEA and TABLEB, as well as for all the indexes associated with both tables (using dbms_stats; I am on a 9i DB), but not for the indexed columns.
for table I use:-
begin
dbms_stats.gather_table_stats(ownname=> 'IVAN', tabname=> 'TABLEA', partname=> NULL);
end;
for index I use:-
begin
dbms_stats.gather_index_stats(ownname=> 'IVAN', indname=> 'TABLEB_IDX_003', partname=> NULL);
end;
Could you show me a sample of how to collect statistics for indexed columns?
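For reference, DBMS_STATS can gather column statistics (histograms) on the indexed columns via the METHOD_OPT parameter; a sketch, keeping the owner/table names from above:

```sql
-- Sketch: gather table stats including histograms on indexed columns.
-- Owner/table names are the ones used earlier in this thread.
begin
  dbms_stats.gather_table_stats(
    ownname    => 'IVAN',
    tabname    => 'TABLEA',
    cascade    => TRUE,  -- also gathers stats for the table's indexes
    method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');
end;
/
```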
regards
Ivan -
Hi,
It's amazing - I am not able to understand the results returned by my query.
Here is my query:
SELECT * FROM TABLE1, TABLE2 WHERE TABLE1.ID = TABLE2.TABLE1_ID AND TABLE1.ID IN (5,4);
This query returns the records for ID 4 first and then those for ID 5.
I do not know why this happens. I need to use the IN operator in my SELECT query, and I want to get the data for the IDs in the order I give them.
How can I get a solution?
But in the query I can give any number of values, like (7,2,1,4,8,..n).
You can use a global temporary table or a collection to specify the order of the
numbers. In this case you have to use a join instead of IN:
SQL> create global temporary table pos_tab (id number, position number);
Table created.
SQL> create table t_test as select rownum col from dict where rownum <=10;
Table created.
SQL> begin
2 insert into pos_tab values(7,1);
3 insert into pos_tab values(5,2);
4 insert into pos_tab values(9,3);
5 end;
6 /
PL/SQL procedure successfully completed.
SQL> select t_test.col from t_test, pos_tab where t_test.col = pos_tab.id
2 order by pos_tab.position;
COL
7
5
9
Simple collection example:
SQL> create type pos_obj is object(id number, pos number);
2 /
Type created.
SQL> create type pos_t is table of pos_obj;
2 /
Type created.
SQL> select col from t_test, table(pos_t(pos_obj(7,1),pos_obj(5,2),pos_obj(9,3))) pos
2 where pos.id = t_test.col order by pos.pos
3 /
COL
7
5
9
Rgds.
Consider: IN-lists are limited to 1000 explicit elements.
Message was edited by:
dnikiforov -
SQL select query having more than 1000 values in 'IN' clause of predicate.
Hi,
We are executing a SELECT query on a table and showing the results through a front-end screen. When the number of values given in the IN clause of the predicate exceeds 1000, it throws an error (ORA-01795).
e.g. select * from Employees where emp.Id in ('111','123','121','3232', ........ 1001 Ids)
We are using Oracle version 10.2.0.
Please suggest how to tackle such issue.
Regards,
Naveen Kumar.C.
Edited by: Naveen Kumar C on Aug 30, 2008 10:01 PM
Use a nested table:
create or replace type numbertype
as object
(nr number(20,10));
/
create or replace type number_table
as table of numbertype;
/
create or replace procedure tableselect
( p_numbers in number_table
, p_ref_result out sys_refcursor)
is
begin
  open p_ref_result for
    select *
      from employees
         , (select /*+ cardinality(tab 10) */ tab.nr
              from table(p_numbers) tab) tbnrs
     where id = tbnrs.nr;
end;
/
Using nested tables will reduce the amount of parsing because the SQL statement uses bind variables! The cardinality hint causes Oracle to use the index on employees.id.
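If changing the code to use a collection is not an option, note that the 1000-element limit applies per IN-list, so a quick (if ugly) workaround is to split the literals into several OR-ed IN-lists - a sketch:

```sql
-- Sketch: ORA-01795 is raised per IN-list,
-- so OR together lists of at most 1000 items each.
select *
from Employees emp
where emp.Id in ('111', '123', /* ... up to 1000 literals ... */ '997')
   or emp.Id in ('1001', '1002', /* ... next batch ... */ '1999');
```

This keeps the statement valid, but every literal still hard-parses differently; the nested-table approach above remains the cleaner fix.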
Using column number inplace of column name in SQL Select statement
Is there a way to run sql select statements with column numbers in
place of column names?
Current SQL
select AddressId, Name, City from Address
Is this possible?
select 1,2,5 from Address
Thanks in Advance
user10962462 wrote:
well, ok, it's not possible with SQL, but how about PL/SQL?
As mentioned, using DBMS_SQL you can only really use positional notation... and you can also use those positions to get other information, such as what the column is called, what its datatype is, etc.
CREATE OR REPLACE PROCEDURE run_query(p_sql IN VARCHAR2) IS
v_v_val VARCHAR2(4000);
v_n_val NUMBER;
v_d_val DATE;
v_ret NUMBER;
c NUMBER;
d NUMBER;
col_cnt INTEGER;
f BOOLEAN;
rec_tab DBMS_SQL.DESC_TAB;
col_num NUMBER;
v_rowcount NUMBER := 0;
BEGIN
-- create a cursor
c := DBMS_SQL.OPEN_CURSOR;
-- parse the SQL statement into the cursor
DBMS_SQL.PARSE(c, p_sql, DBMS_SQL.NATIVE);
-- execute the cursor
d := DBMS_SQL.EXECUTE(c);
-- Describe the columns returned by the SQL statement
DBMS_SQL.DESCRIBE_COLUMNS(c, col_cnt, rec_tab);
-- Bind local return variables to the various columns based on their types
FOR j in 1..col_cnt
LOOP
CASE rec_tab(j).col_type
WHEN 1 THEN DBMS_SQL.DEFINE_COLUMN(c,j,v_v_val,2000); -- Varchar2
WHEN 2 THEN DBMS_SQL.DEFINE_COLUMN(c,j,v_n_val); -- Number
WHEN 12 THEN DBMS_SQL.DEFINE_COLUMN(c,j,v_d_val); -- Date
ELSE
DBMS_SQL.DEFINE_COLUMN(c,j,v_v_val,2000); -- Any other type return as varchar2
END CASE;
END LOOP;
-- Display what columns are being returned...
DBMS_OUTPUT.PUT_LINE('-- Columns --');
FOR j in 1..col_cnt
LOOP
DBMS_OUTPUT.PUT_LINE(rec_tab(j).col_name||' - '||case rec_tab(j).col_type when 1 then 'VARCHAR2'
when 2 then 'NUMBER'
when 12 then 'DATE'
else 'Other' end);
END LOOP;
DBMS_OUTPUT.PUT_LINE('-------------');
-- This part outputs the DATA
LOOP
-- Fetch a row of data through the cursor
v_ret := DBMS_SQL.FETCH_ROWS(c);
-- Exit when no more rows
EXIT WHEN v_ret = 0;
v_rowcount := v_rowcount + 1;
DBMS_OUTPUT.PUT_LINE('Row: '||v_rowcount);
DBMS_OUTPUT.PUT_LINE('--------------');
-- Fetch the value of each column from the row
FOR j in 1..col_cnt
LOOP
-- Fetch each column into the correct data type based on the description of the column
CASE rec_tab(j).col_type
WHEN 1 THEN DBMS_SQL.COLUMN_VALUE(c,j,v_v_val);
DBMS_OUTPUT.PUT_LINE(rec_tab(j).col_name||' : '||v_v_val);
WHEN 2 THEN DBMS_SQL.COLUMN_VALUE(c,j,v_n_val);
DBMS_OUTPUT.PUT_LINE(rec_tab(j).col_name||' : '||v_n_val);
WHEN 12 THEN DBMS_SQL.COLUMN_VALUE(c,j,v_d_val);
DBMS_OUTPUT.PUT_LINE(rec_tab(j).col_name||' : '||to_char(v_d_val,'DD/MM/YYYY HH24:MI:SS'));
ELSE
DBMS_SQL.COLUMN_VALUE(c,j,v_v_val);
DBMS_OUTPUT.PUT_LINE(rec_tab(j).col_name||' : '||v_v_val);
END CASE;
END LOOP;
DBMS_OUTPUT.PUT_LINE('--------------');
END LOOP;
-- Close the cursor now we have finished with it
DBMS_SQL.CLOSE_CURSOR(c);
END;
SQL> exec run_query('select empno, ename, deptno, sal from emp where deptno = 10');
-- Columns --
EMPNO - NUMBER
ENAME - VARCHAR2
DEPTNO - NUMBER
SAL - NUMBER
Row: 1
EMPNO : 7782
ENAME : CLARK
DEPTNO : 10
SAL : 2450
Row: 2
EMPNO : 7839
ENAME : KING
DEPTNO : 10
SAL : 5000
Row: 3
EMPNO : 7934
ENAME : MILLER
DEPTNO : 10
SAL : 1300
PL/SQL procedure successfully completed.
SQL> exec run_query('select * from emp where deptno = 10');
-- Columns --
EMPNO - NUMBER
ENAME - VARCHAR2
JOB - VARCHAR2
MGR - NUMBER
HIREDATE - DATE
SAL - NUMBER
COMM - NUMBER
DEPTNO - NUMBER
Row: 1
EMPNO : 7782
ENAME : CLARK
JOB : MANAGER
MGR : 7839
HIREDATE : 09/06/1981 00:00:00
SAL : 2450
COMM :
DEPTNO : 10
Row: 2
EMPNO : 7839
ENAME : KING
JOB : PRESIDENT
MGR :
HIREDATE : 17/11/1981 00:00:00
SAL : 5000
COMM :
DEPTNO : 10
Row: 3
EMPNO : 7934
ENAME : MILLER
JOB : CLERK
MGR : 7782
HIREDATE : 23/01/1982 00:00:00
SAL : 1300
COMM :
DEPTNO : 10
PL/SQL procedure successfully completed.
SQL> exec run_query('select * from dept where deptno = 10');
-- Columns --
DEPTNO - NUMBER
DNAME - VARCHAR2
LOC - VARCHAR2
Row: 1
DEPTNO : 10
DNAME : ACCOUNTING
LOC : NEW YORK
PL/SQL procedure successfully completed.
SQL> -
How to run a sql statement with bind variable in a test environment
Hello,
I have a SQL statement in prod that I would like to run in test. I got the SQL statement from Statspack, but the statement has bind variables. How do I get this statement to run in test? Thank you.
Hi,
If you have the SQL statement and all the referenced objects are available in your test environment, then what is the problem with running it?
If I am not wrong about your requirement, i.e.:
SQL> select * from emp
where emp_no = &empno
and dept_code = &deptcode;
Thanks -
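Note that &empno and &deptcode above are SQL*Plus substitution variables rather than true bind variables, although for a one-off test run they give the same result. In a client program, the captured statement would instead be executed with placeholders and the bind values supplied at execute time. A minimal sketch of that pattern using Python's standard sqlite3 module — the table and values are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE emp (emp_no INTEGER, dept_code TEXT, ename TEXT)')
conn.executemany('INSERT INTO emp VALUES (?, ?, ?)',
                 [(100, 'D10', 'CLARK'), (200, 'D20', 'SMITH')])

# The ? placeholders stand in for the bind variables captured in prod;
# the actual values are passed separately when the statement executes.
cur = conn.execute('SELECT ename FROM emp WHERE emp_no = ? AND dept_code = ?',
                   (100, 'D10'))
result = cur.fetchall()
```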
I use LV7.1.1 to read/update an Access Database, using DB Toolset.
When I use the Sql 'SELECT' query with LIKE operator, I do not get any results. (no error is returned).
The statement is:
SELECT Machine, System, AD, Query_Position, RefDesignator, Description
FROM AllLevel
WHERE ((Machine LIKE '*') AND (System LIKE '*') AND (AD ='2000CV20012') AND (Query_Position ='AD') AND (RefDesignator='u1'))
It seems that the LIKE '*' is not translated well by the 'Microsoft Jet 4.0 OLE DB Provider'. I must mention that the statement works just fine under Access, not using LV.
Is there any alternative format for LIKE '*' that does the job?
Regards,
Joseph Moshe

Maybe Access has different syntax, but for T-SQL, the proper syntax for LIKE is to use the % character to match any string of zero or more characters and the _ character to match a single character. In fact, the documentation says that while * and ? are understood,
"The % and _ (underscore) wildcard characters should be used only
through the Jet OLE DB provider and ActiveX® Data Objects (ADO) code.
They will be treated as literal characters if they are used through the
Access SQL View user interface or Data Access Objects (DAO) code."
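A quick way to convince yourself of the % behaviour is Python's standard sqlite3 module, whose LIKE follows the same SQL convention of % for zero-or-more characters. The table here is a made-up stand-in for AllLevel:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE AllLevel (Machine TEXT, AD TEXT)')
conn.executemany('INSERT INTO AllLevel VALUES (?, ?)',
                 [('M1', '2000CV20012'), ('M2', 'OTHER')])

# '*' is treated as a literal asterisk in SQL LIKE, so this matches nothing
star = conn.execute("SELECT Machine FROM AllLevel "
                    "WHERE Machine LIKE '*'").fetchall()

# '%' is the SQL wildcard, so this behaves like Access's LIKE '*'
pct = conn.execute("SELECT Machine FROM AllLevel "
                   "WHERE Machine LIKE '%' AND AD = '2000CV20012'").fetchall()
```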
Hello,
When I try to verify the prerequisites to upgrade my SCOM 2012 UR2 Platform to SP1 Beta, I have these errors :
The installed version of SQL Server is not supported for the operational database.
The installed version of SQL Server is not supported for the data warehouse.
But when I execute this query Select @@version on my MSSQL Instance, the result is :
Microsoft SQL Server 2008 R2 (SP1) - 10.50.2500.0 (X64) Jun 17 2011 00:54:03 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)
But
here, we can see that :
Microsoft SQL Server: SQL Server SQL 2008 R2 SP1, SQL Server 2008 R2 SP2, SQL Server 2012, SQL Server 2012 SP1, are supported.
Do I need to patch my MSSQL Server with a specific cumulative update package?
Thanks.

These are the requirements for your SQL:
SQL Server 2008 and SQL Server 2012 are available in both Standard and Enterprise editions. Operations Manager will function with both editions.
Operations Manager does not support hosting its databases or SQL Server Reporting Services on a 32-bit edition of SQL Server.
Using a different version of SQL Server for different Operations Manager features is not supported. The same version should be used for all features.
SQL Server collation settings for all databases must be one of the following: SQL_Latin1_General_CP1_CI_AS, French_CI_AS, Cyrillic_General_CI_AS, Chinese_PRC_CI_AS, Japanese_CI_AS, Traditional_Spanish_CI_AS, or Latin1_General_CI_AS. No other collation
settings are supported.
The SQL Server Agent service must be started, and the startup type must be set to automatic.
Side-by-side installation of System Center Operations Manager 2007 R2 reporting and System Center 2012 Service Pack 1 (SP1) Operations Manager reporting on the same server is not supported.
The db_owner role for the operational database must be a domain account. If you set the SQL Server Authentication to Mixed mode, and then try to add a local SQL Server login on the operational database, the Data Access service will not be able to start.
For information about how to resolve the issue, see
System Center Data Access Service Start Up Failure Due to SQL Configuration Change
If you plan to use the Network Monitoring features of System Center 2012 – Operations Manager, you should move the tempdb database to a separate disk that has multiple spindles. For more information, see
tempdb Database.
http://technet.microsoft.com/en-us/library/jj656654.aspx#BKMK_RBF_OperationsDatabase
Check the SQL server agent service and see whether it is set to automatic AND started. This got me confused at my first SP1 install as well. This is not done by default...
It's doing common things uncommonly well that brings success. -
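One way to double-check the reported build against the support matrix is to parse the version number from SELECT @@version (or SERVERPROPERTY('ProductVersion')) and compare it with the service-pack baseline builds. Here is a rough Python sketch; the baseline build numbers used (RTM 10.50.1600, SP1 10.50.2500, SP2 10.50.4000 for SQL Server 2008 R2) are the commonly cited ones and should be confirmed against Microsoft's official build list:

```python
def sp_level(product_version):
    """Map a SQL Server 2008 R2 build string like '10.50.2500.0' to an SP level.

    Assumed baselines: RTM 10.50.1600, SP1 10.50.2500, SP2 10.50.4000.
    """
    major, minor, build = (int(p) for p in product_version.split('.')[:3])
    if (major, minor) != (10, 50):
        return 'not SQL Server 2008 R2'
    if build >= 4000:
        return 'SP2 or later'
    if build >= 2500:
        return 'SP1'
    return 'RTM'

# The build reported in the post above
level = sp_level('10.50.2500.0')
```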
Blocking Session - blocked SQL - SELECT SYSDATE FROM SYS.DUAL
Oracle 10g (10.0.0.4)
When the database executes some big, long-running queries/operations my system is slow and some users have to wait and can't work (they use Oracle Forms applications) because I often have blocking sessions.
I found out that these blocking sessions block only this query from another user:
SELECT SYSDATE FROM SYS.DUAL
Or:
10-AUG-2009 08:51:10 User X1 (SID=222) with the statement: SELECT ... is blocking the SQL statement on Y1 (SID=333); blocked SQL -> SELECT SYSDATE FROM SYS.DUAL
When I kill one of the blocking session another session take his place and do the same:
10-AUG-2009 08:53:10 User X2 (SID=444) with the statement: SELECT ... is blocking the SQL statement on Y2 (SID=555); blocked SQL -> SELECT SYSDATE FROM SYS.DUAL
When the long queries finish, everything is OK.
Please help me!!!

I created an ASH report:
Top User Events
Avg Active
Event Event Class % Activity Sessions
enq: TM - contention Application 55.87 0.96
db file sequential read User I/O 18.87 0.32
CPU + Wait for CPU CPU 16.33 0.28
db file scattered read User I/O 3.02 0.05
Top Event P1/P2/P3 Values
Event % Event P1 Value, P2 Value, P3 Value % Activity
Parameter 1 Parameter 2 Parameter 3
enq: TM - contention 55.87 "xxxxxxxxxxxxxxxxxxxx" 38.35
name|mode object # table/partition
"1111111111","xxxxxxx","0" 17.44
db file sequential read 19.21 "xxxxxxxxxxxxxxx’’ 0.00
file# block# blocks
db file scattered read 3.03 "xxxxxxxxxxxxxxxxxxxxxx’’ 0.01
file# block# blocks
Top SQL Statements
SQL ID Planhash % Activity Event % Event
fnxxxxxxxxx N/A 25.09 enq: TM - contention 23.47
** SQL Text Not Available **
N/A 25.09 db file sequential read 1.19
** SQL Text Not Available **
byxxxxxxxxxxxxx 1111111 10.11 enq: TM - contention 7.43
SELECT SYSDATE FROM SYS.DUAL
db file sequential read 2.10
fnxxxxxxxxx 11111111111 2.57 enq: TM - contention 2.16
** SQL Text Not Available **
Top DB Objects
Object ID % Activity Event % Event
Object Name (Type) Tablespace
11111 10.33 enq: TM - contention 10.30
XXXXXXXXXXXXXXXXXXXXXXXX (INDEX) CC
99999 10.18 enq: TM - contention 10.16
XXXXXXXXXXXXXXXXXXXXXXXXX (INDEX) IND
933333 6.67 enq: TM - contention 6.55
FFFFFFFFFFFFFFFF (TABLE) T3
114545 3.88 enq: TM - contention 3.85
RRRRRRRRRRRRRRRRRRRRRR (INDEX) JJJ
1136664 2.96 enq: TM - contention 2.93
FFFFFFFFFFFFFFFFFFFFFFFFF (INDEX) G
How can I find the SQL text that is shown as ** SQL Text Not Available **?
What should I do about these Top DB Objects that have the enq: TM - contention event?
And how can I solve this problem? -
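The enq: TM - contention event is a table-level lock wait; a classic cause is DML against a parent table while the foreign key columns on child tables are unindexed, so checking for missing FK indexes on the listed objects is a common first step. The general pattern behind the symptom — one session queuing behind another session's uncommitted transaction until it times out or the holder commits — can be reproduced in miniature with Python's standard sqlite3 module (SQLite locks at the database level, so this only illustrates the queuing behaviour, not Oracle's TM enqueue specifics):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), 'demo.db')

# Session 1: manual transaction control, grabs the write lock and holds it
holder = sqlite3.connect(path, isolation_level=None)
holder.execute('CREATE TABLE t (x INTEGER)')
holder.execute('BEGIN IMMEDIATE')
holder.execute('INSERT INTO t VALUES (1)')  # uncommitted change

# Session 2: queues on the same lock and gives up after its timeout
waiter = sqlite3.connect(path, timeout=0.2)
blocked = False
try:
    waiter.execute('BEGIN IMMEDIATE')
except sqlite3.OperationalError:  # "database is locked"
    blocked = True

holder.execute('COMMIT')  # once session 1 commits, session 2 could proceed
```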
Hi ,
I have configured the DB Adapter for a Pure SQL Select, passing the input parameter CART_KEY to select line items. There are 10 line items for each cart key. In my case there are 10 different records for CART_KEY = 607340. It is selecting 10 records, but the same record 10 times (not unique).
I am not sure, is it caching the record? Do I need to turn caching off?
I have executed the query from the logs; it selects 10 rows.
<2006-07-11 10:46:26,009> <DEBUG> <default.collaxa.cube.ws> <Database Adapter::Outbound> <oracle.tip.adapter.db.TopLinkLogger log> SELECT ACTIVE_FLAG_H, ACTIVE_FLAG_L, ANNUAL_FLAG, CART_LINE_KEY FROM CMGT.VW_QMT_TO_OTM_SPR_DATA WHERE (CART_KEY = ?)
bind => [607340]
<bump>
No one has a clue about how to make this work?
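One cause that fits these symptoms (hedged, since the adapter's mapping isn't shown here) is an object cache keyed on the wrong primary key: if the descriptor's key is CART_KEY rather than the unique line key, all 10 rows map to the same cache entry and the first cached object is returned 10 times. A toy Python sketch of that failure mode, with invented field names modelled on the query above:

```python
# Ten distinct line-item rows for one cart, mimicking the query result
rows = [{'CART_KEY': 607340, 'CART_LINE_KEY': k} for k in range(1, 11)]

def fetch(rows, key_field):
    """Simulate an identity-map cache: rows sharing a key return one object."""
    cache = {}
    out = []
    for row in rows:
        key = row[key_field]
        if key not in cache:
            cache[key] = row
        out.append(cache[key])  # a repeated key returns the first cached row
    return out

wrong = fetch(rows, 'CART_KEY')       # keyed by the cart: 10 copies of row 1
right = fetch(rows, 'CART_LINE_KEY')  # keyed by the line: 10 distinct rows
```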