Using a UDF in query select
I am using CF8 and MS SQL Server. I am getting an error on "select Str, CapFirst(Str) ..." when trying the following:
<cffunction name="CapFirst" returntype="string" output="false">... </cffunction>
<cfquery name="query" dbtype="query">
select Str, CapFirst(Str) as CapStr from SampleTable
</cfquery>
Thanks for your help
You can't run a UDF like that in any <cfquery> SQL string, whether it's a QoQ or a query against any other DB's driver.
To use a UDF when building the SQL string, you call it the way you would anywhere else: #myFunctionHere()#. That won't help you here, though, because all CF calls are resolved before the resulting string is sent to the DB driver,
and the string that goes to the driver needs to be valid SQL.
So you cannot do what you're wanting to do via this approach.
I presume the original data is coming from a DB? Why don't you run an equivalent function on the DB before passing it back to CF, so the results are already how you want them to be?
Adam
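A minimal illustration of the "run an equivalent function on the DB" suggestion above, sketched in Python with sqlite3 standing in for MS SQL Server (the table name follows the question; everything else is invented for the example). The first-letter capitalization is done in plain SQL, so the rows arrive already formatted and no UDF is needed in the query:

```python
import sqlite3

# Hypothetical setup standing in for the real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SampleTable (Str TEXT)")
conn.executemany("INSERT INTO SampleTable VALUES (?)", [("apple",), ("banana",)])

# Capitalize the first letter on the database side, in valid SQL.
rows = conn.execute(
    "SELECT Str, UPPER(SUBSTR(Str, 1, 1)) || SUBSTR(Str, 2) AS CapStr "
    "FROM SampleTable"
).fetchall()
print(rows)  # [('apple', 'Apple'), ('banana', 'Banana')]
```

On MS SQL Server the same idea would use its own string functions, but the point is the same: the formatting happens inside the SQL, not via a CF function call.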
Similar Messages
-
XSQL bug when using CURSOR in xsql:query SELECT statement?
Hi there,
When I tested with different XSQL pages, I found that if I
did not involve any XSQL pages containing "CURSOR", I received
data correctly, and when I shut down Tomcat the Oracle DB server did
NOT create any dump file. However, as soon as I involve an
XSQL page which contains "CURSOR", even though I received data
correctly, when I shut down Tomcat the Oracle DB server
created a dump file.
for example, if I involve xsql:query like:
<xsql:query>
SELECT emp_name,
emp_id,
CURSOR( SELECT emp_address
from address a
where a.emp_id = b.emp_id )
FROM employee b
</xsql:query>
Once I involve this XSQL page, when I shut down Tomcat, the Oracle
DB will create a dump file on the server.
Even when I run this XSQL page from
oracle.xml.xsql.XSQLCommandLine, the Oracle DB server still creates a
dump file on the server.
Any ideas?
Thanks,
Hi,
Is this what you are trying:
try {
Statement *stmt = conn->createStatement ("SELECT ename AS aaaaaaaaaaaaaaa FROM emp");
ResultSet *rs = stmt->executeQuery ();
vector<MetaData> md = rs->getColumnListMetaData ();
int numCols = md.size ();
cout << "Number of columns :" << numCols << endl;
string *colName = new string [numCols];
for (int i = 0; i < numCols; i++ ) {
int ptype = md[ i ].getInt (MetaData::ATTR_PTYPE);
if ( ptype == MetaData::PTYPE_COL ) {
colName[ i ] = md[ i ].getString (MetaData::ATTR_NAME);
cout << "Column Name :" << colName[ i ] << endl;
}
}
delete[] colName;
stmt->closeResultSet (rs);
conn->terminateStatement (stmt);
}
catch (SQLException &ex) {
cout << ex.getMessage() << endl;
}
The above snippet works correctly for me.
Rgds.
Amogh -
How to use INSERT with more than one query (SELECT)
insert into FISH_FAMILIES (id,code ,SCIENCE_NAME,ARABIC_NAME,LATIN_NAME,IS_FAMILY)
SELECT ROWNUM,to_char(ROWNUM) FROM dual CONNECT BY LEVEL <= 43,
select distinct FAMILY, ARABIC_DESCRIPTION,LATIN_DESCRIPTION from FISH_SPECIES ,
select distinct IS_FAMILY from FISH_SPECIES
I don't know what INSERT format to use with
more than one SELECT.
ali_alkomi wrote:
ORA-01427: single-row subquery returns more than one row
insert into FISH_FAMILIES(SCIENCE_NAME,ARABIC_NAME,LATIN_NAME,IS_FAMILY)
values ( (select distinct FAMILY from FISH_SPECIES) ,(select ARABIC_DESCRIPTION from FISH_SPECIES) ,(select LATIN_DESCRIPTION from FISH_SPECIES),(select distinct IS_FAMILY from FISH_SPECIES
))
You were given an answer to that over on your other question:
How Generating Series Of Numbers Without use Sequences
If you are having a problem understanding the answers, do not start a new thread; otherwise the same people who were helping you originally may not know you are still having a problem.
To insert multiple rows of data you cannot use the VALUES clause of the INSERT statement (as it tells you in the documentation if you read it).
You need:
INSERT INTO ...
SELECT ... FROM ... UNION ALL
SELECT ... FROM ... UNION ALL
SELECT ... FROM ... etc.
or, in your case, you would need something simpler, like:
insert into FISH_FAMILIES(SCIENCE_NAME,ARABIC_NAME,LATIN_NAME,IS_FAMILY)
select fs.family, fs.arabic_description, fs.latin_description, fs.is_family
from fish_species fs -
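An illustrative sketch of the INSERT ... SELECT approach above, using Python's sqlite3 (the sample values are invented; only the statement shape matters). The SELECT feeds multiple rows into the INSERT in one statement, which the VALUES clause cannot do:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fish_species (family TEXT, arabic_description TEXT,"
             " latin_description TEXT, is_family TEXT)")
conn.execute("CREATE TABLE fish_families (science_name TEXT, arabic_name TEXT,"
             " latin_name TEXT, is_family TEXT)")
conn.executemany(
    "INSERT INTO fish_species VALUES (?, ?, ?, ?)",
    [("Clupeidae", "desc1", "Clupeidae", "Y"),
     ("Scombridae", "desc2", "Scombridae", "Y")],
)

# One INSERT, many rows: the SELECT supplies the row set.
conn.execute(
    "INSERT INTO fish_families (science_name, arabic_name, latin_name, is_family) "
    "SELECT fs.family, fs.arabic_description, fs.latin_description, fs.is_family "
    "FROM fish_species fs"
)
print(conn.execute("SELECT COUNT(*) FROM fish_families").fetchone()[0])  # 2
```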
Query selection filter by using wildcards
Dear all,
we have SAP BI 7 and we would like to improve usability for the end user. We would like to use a wildcard ('*') in the variable selection field of a reporting query to make the search/filter easier. The search should support at least wildcards, or even better the R/3 functionality (returning results similar to the searched word).
An example:
A report shows sales volume per customer and country. The user has to select the customer in the variable selection before the report is shown. As there are thousands of customers listed, the user should be able to use wildcards to list only relevant/wanted ones (e.g. 'ak*') and should get all customers containing any string like AK, ak, or other variants, independent of whether lower or upper case has been entered in the filter search.
Question:
1) How can you enable this wildcard filter function?
2) Is there any documentation/example available what can be entered and how the result is showing
Thanks for your support!!!
Edited by: Markus Reith on Jan 7, 2008 3:34 PM
Hello,
When you create a variable on the infoobject for selection, choose "Selection Option" from the drop-down box on the Details tab. Now when you execute the report it will give you a box before the input box; select * in that and the wildcard function should work. Just try it out.
Regds,
Shashank -
Trouble using a pipelined function in a select list LOV query
I'm trying to use a pipelined function in a select list LOV query but I get the error
"LOV query is invalid, a display and a return value are needed, the column names need to be different. If your query contains an in-line query, the first FROM clause in the SQL statement must not belong to the in-line query."
my query is as follows :
SELECT gt.navn d, gt.GEOGRAPHY_TYPE_ID r
FROM GEOGRAPHY_TYPE gt
WHERE gt.kode NOT IN (1)
and gt.kode in (select lov_value from table(RAPPORT_FILTER_PKG.GET_RAPPORT_FILTER_VALUE_PIP (
SYS_CONTEXT ('rapport_filter_ctx','filter_id'),'GEOGRAPHY_TYPES')) )
ORDER BY gt.navn DESC
if I use a discrete value '80' instead of the call to
SYS_CONTEXT ('rapport_filter_ctx','filter_id')
I don't get any errors, but then the LOV isn't as dynamic as it has to be.
Any ideas?
Edited by: [email protected] on Dec 1, 2008 8:50 AM
Edited by: [email protected] on Dec 1, 2008 11:17 AM
Nope, that doesn't do it either.
It contains a syntax error at
SYS_CONTEXT (('rapport_filter_ctx',:P500_RAPPORT_FILTER_ID),'GEOGRAPHY_TYPES'))
my theory is that it's got something to do with the way APEX binds values because
the query
SELECT gt.navn d, gt.GEOGRAPHY_TYPE_ID r
FROM GEOGRAPHY_TYPE gt
WHERE gt.kode NOT IN (1)
and gt.kode in (select r from table(RAPPORT_FILTER_PKG.GET_RAPPORT_FILTER_VALUE_PIP ('80','GEOGRAPHY_TYPES')) )
ORDER BY gt.navn DESC
works fine in both TOAD and APEX, but as soon as I replace the '80' with :P500_RAPPORT_FILTER_ID, APEX won't accept the code.
Edited by: [email protected] on Dec 3, 2008 7:54 AM -
Duplicate entries missing using for all entries in select query.
Hi Gurus,
Is there any way to avoid missing duplicate entries in an internal table if you use FOR ALL ENTRIES in a SELECT statement?
Note: I am selecting from two tables using non-key fields and I have to aggregate the data. I want only 2 data fields and one amount field in my final internal table. I could add all the primary key fields to my internal table and collect my required fields into another table, but I just want to know: is there any other way to avoid missing duplicate entries without adding all the key fields?
Regards,
Raghavendra
Hi,
Just check which other fields in the table may have duplicate entries
and make use of them in the selection accordingly.
You will not miss any entries unless there is a restriction on them.
You can best judge that in debugging mode while selecting data from that table.
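The underlying effect: FOR ALL ENTRIES implicitly deduplicates the result set, so rows that differ only in fields you did not select collapse into one. A rough illustration of that effect in Python (the field names and values are invented):

```python
# Rows as they exist in the table: (key, data_field, amount).
rows = [(1, "A", 10), (2, "A", 10), (3, "B", 5)]

# Selecting only (data_field, amount) and deduplicating -- as
# FOR ALL ENTRIES effectively does -- silently loses a row.
without_keys = sorted(set((d, a) for _, d, a in rows))
print(without_keys)  # [('A', 10), ('B', 5)] -- one ('A', 10) row is gone

# Including the key field keeps every row; aggregate afterwards.
with_keys = sorted(set(rows))
totals = {}
for _, d, a in with_keys:
    totals[d] = totals.get(d, 0) + a
print(totals)  # {'A': 20, 'B': 5}
```

This is why the usual advice is to include the key fields in the selection and COLLECT into a second table afterwards.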
How to make sql to use index/make to query to perform better
Hi,
I have 2 sql query which results the same.
But both has difference in SQL trace.
create table test_table
(u_id number(10),
u_no number(4),
s_id number(10),
s_no number(4),
o_id number(10),
o_no number(4),
constraint pk_test primary key(u_id, u_no));
insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
values (2007030301, 1, 1001, 1, 2001, 1);
insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
values (2007030302, 1, 1001, 1, 2001, 2);
insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
values (2007030303, 1, 1001, 1, 2001, 3);
insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
values (2007030304, 1, 1001, 1, 2001, 4);
insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
values (2007030305, 1, 1002, 1, 1001, 2);
insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
values (2007030306, 1, 1002, 1, 1002, 1);
commit;
CREATE INDEX idx_test_s_id ON test_table(s_id, s_no);
set autotrace on
select s_id, s_no, o_id, o_no
from test_table
where s_id <> o_id
and s_no <> o_no
union all
select o_id, o_no, s_id, s_no
from test_table
where s_id <> o_id
and s_no <> o_no;
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=8 Bytes=416)
1 0 UNION-ALL
2 1 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=4 Bytes=208)
3 1 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=4 Bytes=208)
Statistics
223 recursive calls
2 db block gets
84 consistent gets
0 physical reads
0 redo size
701 bytes sent via SQL*Net to client
508 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
8 rows processed
-- I didn't understand why the above query is not using the index idx_test_s_id,
-- but it is still faster.
select s_id, s_no, o_id, o_no
from test_table
where (u_id, u_no) in
(select u_id, u_no from test_table
minus
select u_id, u_no from test_table
where s_id = o_id
or s_no = o_no)
union all
select o_id, o_no, s_id, s_no
from test_table
where (u_id, u_no) in
(select u_id, u_no from test_table
minus
select u_id, u_no from test_table
where s_id = o_id
or s_no = o_no);
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=16 Card=2 Bytes=156)
1 0 UNION-ALL
2 1 FILTER
3 2 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=6 Bytes=468)
4 2 MINUS
5 4 INDEX (UNIQUE SCAN) OF 'PK_TEST' (INDEX (UNIQUE)) (Cost=1 Card=1 Bytes=26)
6 4 TABLE ACCESS (BY INDEX ROWID) OF 'TEST_TABLE' (TABLE) (Cost=2 Card=1 Bytes=78)
7 6 INDEX (UNIQUE SCAN) OF 'PK_TEST' (INDEX (UNIQUE)) (Cost=1 Card=1)
8 1 FILTER
9 8 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=6 Bytes=468)
10 8 MINUS
11 10 INDEX (UNIQUE SCAN) OF 'PK_TEST' (INDEX (UNIQUE)) (Cost=1 Card=1 Bytes=26)
12 10 TABLE ACCESS (BY INDEX ROWID) OF 'TEST_TABLE' (TABLE) (Cost=2 Card=1 Bytes=78)
13 12 INDEX (UNIQUE SCAN) OF 'PK_TEST' (INDEX (UNIQUE)) (Cost=1 Card=1)
Statistics
53 recursive calls
8 db block gets
187 consistent gets
0 physical reads
0 redo size
701 bytes sent via SQL*Net to client
508 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
8 rows processed
-- The above query uses the index PK_TEST, but it still does a FULL SCAN of the
-- table twice, and it has the higher cost.
1st query --> SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=8 Bytes=416)
2nd query --> SELECT STATEMENT Optimizer=ALL_ROWS (Cost=16 Card=2 Bytes=156)
My queries are:
1) performance wise which query is better?
2) how do i make the 1st query to use an index
3) is there any other method to get the same result by using any index
Appreciate your immediate help.
Best regards
Muthu
Hi William,
Nice... it works. I have added "o_id" and "o_no" as part of the index,
and now the query uses the index:
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=8 Bytes=416)
1 0 UNION-ALL
2 1 INDEX (FULL SCAN) OF 'IDX_TEST_S_ID' (INDEX) (Cost=1 Card=4 Bytes=208)
3 1 INDEX (FULL SCAN) OF 'IDX_TEST_S_ID' (INDEX) (Cost=1 Card=4 Bytes=208)
Statistics
7 recursive calls
0 db block gets
21 consistent gets
0 physical reads
0 redo size
701 bytes sent via SQL*Net to client
507 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8 rows processed
But my questions are:
1) In a WHERE clause, if a "<>" condition is used, will the system use the index? I have observed in several situations that even though the column in the WHERE clause is indexed, the index is not used when the condition is "like" or "is null/is not null".
Similarly, I assumed that if we use <>, indexes will not be used. Is that true?
2) Now, after adding "o_id" and "o_no" columns to the index, the Execution plan is:
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=8 Bytes=416)
1 0 UNION-ALL
2 1 INDEX (FULL SCAN) OF 'IDX_TEST_S_ID' (INDEX) (Cost=1 Card=4 Bytes=208)
3 1 INDEX (FULL SCAN) OF 'IDX_TEST_S_ID' (INDEX) (Cost=1 Card=4 Bytes=208)
Before it was :
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=8 Bytes=416)
1 0 UNION-ALL
2 1 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=4 Bytes=208)
3 1 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=4 Bytes=208)
Difference only in Cost (reduced), not in Card, Bytes.
Can you explain how I can decide which makes the performance better (Cost / Card / Bytes)? Full Scan / Range Scan?
On statistics also:
Before:
Statistics
52 recursive calls
0 db block gets
43 consistent gets
0 physical reads
0 redo size
701 bytes sent via SQL*Net to client
507 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8 rows processed
After:
Statistics
7 recursive calls
0 db block gets
21 consistent gets
0 physical reads
0 redo size
701 bytes sent via SQL*Net to client
507 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8 rows processed
Difference in recursive calls & consistent gets.
Which one shows the query with better performance?
Please explain..
Regards
Muthu -
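The general behaviour behind question 1 can be checked on any database by inspecting the plan. A small sketch using SQLite (not Oracle, so the plan text and optimizer details differ, but the pattern is the same): an equality predicate on an indexed column is a candidate for an index search, while an inequality ("<>") predicate typically forces a scan, because it excludes only a single value and cannot be turned into a narrow index range.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.execute("CREATE INDEX idx_a ON t(a)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(i, i * 2) for i in range(100)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows: (id, parent, notused, detail)
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

print(plan("SELECT * FROM t WHERE a = 5"))   # SEARCH ... USING INDEX idx_a
print(plan("SELECT * FROM t WHERE a <> 5"))  # SCAN (index not used)
```

As William's fix shows, the exception is a covering index: once every selected column is in the index, the optimizer can replace the table scan with an INDEX FULL SCAN even though the <> predicate itself is not indexable.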
Powershell use Connection String to query Database and write to Excel
Right now I have a PowerShell script that uses ODBC to query a SQL Server 2008 / 2012 database and write to Excel:
$excel = New-Object -Com Excel.Application
$excel.Visible = $True
$wb = $Excel.Workbooks.Add()
$ws = $wb.Worksheets.Item(1)
$ws.name = "GUP Download Activity"
$qt = $ws.QueryTables.Add("ODBC;DSN=$DSN;UID=$username;PWD=$password", $ws.Range("A1"), $SQL_Statement)
if ($qt.Refresh()){
$ws.Activate()
$ws.Select()
$excel.Rows.Item(1).HorizontalAlignment = $xlCenter
$excel.Rows.Item(1).VerticalAlignment = $xlTop
$excel.Rows.Item("1:1").Font.Name = "Calibri"
$excel.Rows.Item("1:1").Font.Size = 11
$excel.Rows.Item("1:1").Font.Bold = $true
$filename = "D:\Script\Reports\Status_$a.xlsx"
if (test-path $filename ) { rm $filename }
$wb.SaveAs($filename, $xlOpenXMLWorkbook) #save as an XML Workbook (xslx)
$wb.Saved = $True #flag it as being saved
$wb.Close() #close the document
$Excel.Quit() #and the instance of Excel
$wb = $Null #set all variables that point to Excel objects to null
$ws = $Null #makes sure Excel deflates
$Excel=$Null #let the air out
I would like to use a connection string to query the database and write the results to Excel, i.e.:
$SQL_Statement = "SELECT ..."
$conn = New-Object System.Data.SqlClient.SqlConnection
$conn.ConnectionString = "Server=10.10.10.10;Initial Catalog=mydatabase;User Id=$username;Password=$password;"
$conn.Open()
$cmd = New-Object System.Data.SqlClient.SqlCommand($SQL_Statement,$conn)
do{
try{
$rdr = $cmd.ExecuteReader()
while ($rdr.read()){
$sql_output += ,@($rdr.GetValue(0), $rdr.GetValue(1))
}
$transactionComplete = $true
}
catch{
$transactionComplete = $false
}
}until ($transactionComplete)
$conn.Close()
How would I read the columns and data from $sql_output into an Excel worksheet? Where do I find these tutorials?
Hi Q.P.Waverly,
If you mean to export the data in $sql_output to excel document, please try to format the output with psobject:
$sql_output=@()
do{
try{
$rdr = $cmd.ExecuteReader()
while ($rdr.read()){
$sql_output += New-Object PSObject -Property @{data1 = $rdr.GetValue(0); data2 = $rdr.GetValue(1)}
}
$transactionComplete = $true
}
catch{
$transactionComplete = $false
}
}until ($transactionComplete)
$conn.Close()
Then please try to use the cmdlet "Export-Csv" to export the data to a CSV file (which Excel can open) like:
$sql_output | Export-Csv d:\data.csv
Or you can export to worksheet like:
$excel = New-Object -ComObject Excel.Application
$excel.Visible = $true
$workbook = $excel.Workbooks.Add()
$sheet = $workbook.ActiveSheet
$counter = 0
$sql_output | ForEach-Object {
$counter++
$sheet.cells.Item($counter,1) = $_.data1
$sheet.cells.Item($counter,2) = $_.data2
}
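The same fetch-then-export pattern, sketched outside PowerShell for comparison: a rough Python equivalent using sqlite3 as a stand-in for SQL Server and the csv module in place of Export-Csv (table, column, and header names are invented for the example):

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE activity (host TEXT, downloads INTEGER)")
conn.executemany("INSERT INTO activity VALUES (?, ?)", [("gup01", 12), ("gup02", 7)])

# Fetch all rows first, then write them out with a header row,
# just as Export-Csv does with the collected $sql_output objects.
rows = conn.execute("SELECT host, downloads FROM activity").fetchall()
buf = io.StringIO()  # in-memory stand-in for a real file
writer = csv.writer(buf)
writer.writerow(["data1", "data2"])
writer.writerows(rows)
print(buf.getvalue())
```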
Refer to:
PowerShell and Excel: Fast, Safe, and Reliable
If there is anything else regarding this issue, please feel free to post back.
Best Regards,
Anna Wang -
When using TODATE function MDX query is not correctly generated
Essbase 9.3.1.2 and OBIEE 10.1.3.4.1.
When using TODATE function MDX query is not correctly generated.
This leads to unexpected values not only on cumulative columns in report (generated with TODATE), but also other columns (calculated with AGO function or directly read from cube) have incorrect values.
The problem occurs when you filter on a column that is not in the select list. If you filter on just one level of dimension, results are fine. You can filter on multiple dimensions as long as you filter on just one level of each dimension.
If you filter on two or more levels of one dimension, than results are not correct. In some cases results for TODATE column are all zeros, in some cases it is a random value returned by Essbase (same random value for all rows of that column), and in some cases BI Server returns an error:
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. Essbase Error: Network error [10054]: Cannot Send Data (HY000).
Here is generated MDX code:
With
set [Grupe proizvoda2] as '{[Grupe proizvoda].[N4]}'
set [Grupe proizvoda4] as 'Generate([Grupe proizvoda2], Descendants([Grupe proizvoda].currentmember, [Grupe proizvoda].Generations(4), leaves))'
set [Segmentacija2] as '{[Segmentacija].[RETAIL]}'
set [Segmentacija4] as 'Filter(Generate({[Segmentacija2]}, Descendants([Segmentacija].currentmember, [Segmentacija].Generations(4),SELF), ALL), ([Segmentacija].CurrentMember IS [Segmentacija].[AFFLUENT]))'
set [Vrijeme3] as '{[Vrijeme].[MJESEC_4_2009]}'
member [Segmentacija].[SegmentacijaCustomGroup]as 'Sum([Segmentacija4])', SOLVE_ORDER = AGGREGATION_SOLVEORDER
member [Accounts].[MS1] as '(ParallelPeriod([Vrijeme].[Gen3,Vrijeme],2,[Vrijeme].currentmember), [Accounts].[Trosak kapitala])'
member [Accounts].[MS2] as '(ParallelPeriod([Vrijeme].[Gen3,Vrijeme],1,[Vrijeme].currentmember), [Accounts].[Trosak kapitala])'
member [Accounts].[MS3] as 'AGGREGATE({PeriodsToDate([Vrijeme].[Gen2,Vrijeme],[Vrijeme].currentmember)}, [Accounts].[Trosak kapitala])'
select
{ [Accounts].[Trosak kapitala],
[Accounts].[MS1],
[Accounts].[MS2],
[Accounts].[MS3]
} on columns,
NON EMPTY {crossjoin ({[Grupe proizvoda4]},{[Vrijeme3]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from [NISE.NISE]
where ([Segmentacija].[SegmentacijaCustomGroup])
If you remove part with TODATE function, the results are fine. If you leave TODATE function, OBIEE returns an error mentioned above. If you manually modify variable SOLVE_ORDER and set value to, for example, 100 instead of AGGREGATION_SOLVEORDER, results are OK.
In all cases when this variable was modified in the generated MDX and the query was manually executed on Essbase, the results were OK. This variable seems to be the likely problem.
Hi,
Version is
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
CORE 10.2.0.5.0 Production
TNS for 64-bit Windows: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production
Sorry, in my last post I forgot to mention that I already created a function-based index, but it is still not being used because there is a UNIQUE constraint on that column.
Thanks -
How can I remove ASCII text from a field when I use it in a query
How can I remove ASCII text from a field when I use it in a query?
I am running a select statement on a table that appears to have ASCII text in some of the fields. If I use these fields in the where statement like the code below nothing returns:
SELECT FIELD1 FROM TABLE1 WHERE FIELD1 IS NULL
But the field looks empty if I do a straight select without the where clause. Additionally, one of the fields has text but appears to be padded out with ASCII text, which I need to strip out before I can use this field in a where or join statement. I have tried using a trim, ltrim, rtrim, to_char, nvl, decode and nothing works. When I use excel to run the same query it looks as if these ASCII fields are boxes.
I have asked our DBA team to see what they can do to prevent these from going into the table, but in the mean time I still need to run this report.
Do you have any suggestions?
Can you provide an example? I've been trying (for example)
select translate(' test one', ascii(' '), 'X') from dual
with no luck.
Thank you.
To replace a space, you should query like this:
select translate(' test one', chr(32), 'X') from dual
instead of:
select translate(' test one', ascii(' '), 'X') from dual
Thanks,
Dharmesh Patel -
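The point generalizes: TRANSLATE needs character arguments, and ASCII(' ') returns the number 32, not a character, which is why CHR(32) works. For comparison, a rough Python sketch of the same character-replacement idea, including stripping the non-printable padding that renders as "boxes" in Excel (the sample strings are invented):

```python
# Replace spaces with 'X', mirroring the Oracle TRANSLATE example.
print(" test one".translate(str.maketrans(" ", "X")))  # XtestXone

# Strip non-printable (control) characters that render as boxes,
# e.g. NUL padding at the end of a fixed-width field.
padded = "value\x00\x00\x07"
cleaned = "".join(ch for ch in padded if ch.isprintable())
print(cleaned)  # value
```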
Getting the MDX query select error when running a webi report on BI query
Getting the following error when running a webi report on BI query :
A database error occured. The database error text is: The MDX query SELECT { [Measures].[D8JBFK099LLUVNLO7JY49FJKU] } ON COLUMNS , NON EMPTY [ZCOMPCODE].[LEVEL01].MEMBERS ON ROWS FROM [ZTEST_CUB/REP_20100723200521] failed to execute with the error Unknown error. (WIS 10901).
I have gone through many threads related to this error. But not able find the steps to follow for resoultion.
Please help in this regard.
Thanks,
Jeethender
The Fix Pack is also for the Client Tools; it is a separate download. Please see the text below for ADAPT01255422.
ADAPT01255422
Description:
Web Intelligence generates an incorrect MDX statement when a characteristic and a prompt are used.
The following database error happens: "The MDX query ... failed to execute with the error
Unknown error (WIS 10901)."
New Behavior:
This problem is resolved.
This information is also available in the Fixed Issues document for any Fix Pack greater than 2.2. -
Use a manual SQL query in an interface
Hello,
I would like to know if I can use a manual SQL query in an interface.
Here is what I need.
I have two tables.
T1 with 4 fields :
idT1, LibC, val, lib_val
An example of a line from T1
1, field1, 33 , value 33
2, field2, 44 , value 44
And table T2 with fields such as:
idT2, ... , field1, field2
There is no key to join T1 and T2, but I should retrieve the value of field lib_val which corresponds to the value of field1.
In SQL, the query looks like this:
SELECT t2.lib_val
FROM t2 , t1
WHERE T2.LibC = "Column_name"
AND T2.val = T1.Column_name
AND t1.idT1 = xyz
You should go for a yellow interface. It will solve your problem. Here you go:
http://odiexperts.com/how-to-create-a-temp-table-in-odi-interface/
https://blogs.oracle.com/warehousebuilder/entry/odi_11g_simple_flexible_powerful
Thanks. -
Sorting Currency conversion Variables in Query Selection screen
Hi Experts,
I have an issue in sorting variables.
A currency translation is used here which has two selection options: one for "Currency" and one for "Currency conversion date". They just pop in somewhere in the selection screen without any order. I want to put them together, either in the middle or at the bottom.
They are not visible in the Variable sequence tab of query properties.
I changed the text for each of them (Ex. "XCurrency"), so that it sorts alphabetically, but no difference.
Has anybody come across this issue before?
regards
G Rai
Hi,
This seems strange, that the variable does not show in the query properties. One thing I can suggest is to try changing the technical name/description of the variable to see if that changes the order.
Please update back the thread if that helps. -
How to use LIKE keyword in a select statement
Hi:
I want to use the following sql statement,
String str = "xyz";
String query="Select * from tab where col like '"+str+"' '%' ";
Kindly suggest
TIA
% is a wildcard in the like clause, so:
String str = "xyz";
String query="Select * from tab where col like '%"+str+"%'"; -
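Note that splicing str straight into the SQL string also invites SQL injection; a safer variant binds the whole %...% pattern as a parameter. A small sketch of that approach using Python's sqlite3 (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (col TEXT)")
conn.executemany("INSERT INTO tab VALUES (?)", [("xyzzy",), ("abc",), ("my xyz!",)])

s = "xyz"
# Build the %...% pattern in the bound parameter, not in the SQL text.
rows = conn.execute(
    "SELECT col FROM tab WHERE col LIKE ?", ("%" + s + "%",)
).fetchall()
print(rows)  # [('xyzzy',), ('my xyz!',)]
```

In JDBC the equivalent would be a PreparedStatement with `setString(1, "%" + str + "%")`.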
I am considering using GROUP BY in several selects where I will access the very big tables BSIK, BSID, BSAD, BSAK, etc.
My two concerns are performance and high data volume.
GROUP BY should reduce data arriving from DB but will it hurt performance?
What are your considerations?
Any ideas are welcome. I cannot seem to find anything specific for or against GROUP BY.
thanks,
Phillip
Hello Phillip,
I would advise that you do not use the Group By clause in the select statements for these tables. As you have rightly said, the data volume in these tables might be a major cause for concern.
Using the Group By clause will heavily load the database server. While your query might run okay in the development environment, it might crash in the productive environment. The reason: the cursor to the database would have timed out.
The Group By operation, as you know can be simulated after you get the data into the application server. The application server processing can take some time but you can usually optimize it.
Regards,
Anand Mandalika.
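The "simulate GROUP BY on the application server" idea is simply: select the raw rows without aggregation, then aggregate them in memory. A rough sketch of that pattern in Python (the account numbers and amounts are invented; in ABAP this would be a LOOP with COLLECT):

```python
from collections import defaultdict

# Raw rows as they might come back from the database: (account, amount).
rows = [("4711", 100), ("4712", 40), ("4711", 60)]

# Aggregate on the "application server" instead of in the SELECT.
totals = defaultdict(int)
for account, amount in rows:
    totals[account] += amount
print(dict(totals))  # {'4711': 160, '4712': 40}
```

The trade-off Anand describes: this moves CPU work and memory use from the database server to the application server, where it is usually easier to optimize and cannot time out a database cursor.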