Using 3 tables in Merge function T-SQL
Hi there,
I have a source table A, a view, and a target table B.
I am trying to write a MERGE statement for an incremental (Type 1) load.
Table A should be checked against the view: if a record is not found, it should be loaded into table B (the target).
The columns are ID and Type.
If the ID is the same but the type name is different, then the Type should be updated in table B (the target).
I usually write a MERGE statement like this:

MERGE table_x AS x
USING table_y AS y
    ON x.id = y.id
WHEN MATCHED THEN
    UPDATE SET x.description = y.description
WHEN NOT MATCHED THEN
    INSERT (id, description)
    VALUES (y.id, y.description);

but I don't understand how to write it for my scenario. Can anyone guide me, please?
Please try this.
DECLARE @MergeData TABLE
(
    Col1       INT,
    Col2       VARCHAR(10),
    ActionDone VARCHAR(10)
);

-- Work on a copy so the real source table is untouched
SELECT * INTO #Temp_SourceA FROM SourceA;

-- Run the MERGE and capture what it did via the OUTPUT clause
INSERT @MergeData (Col1, Col2, ActionDone)
SELECT Col1, Col2, ActionDone
FROM
(
    MERGE #Temp_SourceA AS Trgt
    USING View_Source AS Src
        ON Trgt.Col1 = Src.Col1
    WHEN MATCHED AND Trgt.Col2 <> Src.Col2 THEN
        UPDATE SET Trgt.Col2 = Src.Col2
    WHEN NOT MATCHED THEN
        INSERT (Col1, Col2)
        VALUES (Src.Col1, Src.Col2)
    OUTPUT $action AS ActionDone, Inserted.Col1, Inserted.Col2
) AS Data;

SELECT * FROM @MergeData;

-- Apply the captured actions to the real target table
INSERT INTO TargetA (Col1, Col2)
SELECT Col1, Col2 FROM @MergeData WHERE ActionDone = 'INSERT';

UPDATE T1
SET T1.Col2 = T2.Col2
FROM TargetA T1
INNER JOIN @MergeData T2 ON T1.Col1 = T2.Col1 AND T2.ActionDone = 'UPDATE';

DROP TABLE #Temp_SourceA;

--Select * From View_Source
--Select * From SourceA
Please have a look at the comments.
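If the target table can be merged into directly, the whole Type 1 load can also be sketched as a single MERGE, with the view doing the lookup inside the USING subquery. This is only a sketch: TargetB, SourceA and SourceView are placeholder names for the poster's actual table, source and view.

```sql
MERGE TargetB AS tgt
USING (
    -- Source rows from table A that are not present in the view
    SELECT a.ID, a.[Type]
    FROM SourceA AS a
    WHERE NOT EXISTS (SELECT 1 FROM SourceView AS v WHERE v.ID = a.ID)
) AS src
    ON tgt.ID = src.ID
WHEN MATCHED AND tgt.[Type] <> src.[Type] THEN
    UPDATE SET tgt.[Type] = src.[Type]   -- Type 1: overwrite in place
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, [Type])
    VALUES (src.ID, src.[Type]);
```

Whether the NOT EXISTS filter belongs in the USING subquery depends on exactly what role the view plays in the load, so adjust that part to the real requirement.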
Similar Messages
-
Using Create table command in a Pl/Sql Anonymous Block
Hi,
I need to create a table dynamically based on the table_name and column_names that a user wants. When I use a PL/SQL anonymous block to do this, it complains. Any suggestions?
Thanks,
Marisa

Personally, this sounds like a bad design to me. I would say that under most "normal" circumstances you should not be creating tables on the fly, especially one where a user has control over which columns and datatypes to use. Let a developer or DBA take care of that.
-
Using Standard SAP Tables in SAP Tables, clusters or functions connections
Hi Gurus,
I am trying to use standard SAP tables like MARA, MAKT, etc. in my Crystal designer. When I make a new connection using SAP Tables, Cluster or Functions, these tables are not listed.
Any configuration i have to maintain to list those standard table.?
With Regards,
Balachander.S

Due to performance reasons there is a limitation on the number of displayed table names. Once you are in the connection/table browser, select a table and invoke the context menu by pressing the right mouse button. Select Options and in the options panel you can use wildcards in order to limit the results to the desired range (e.g. use MA% to get a list of tables starting with MA). After you close the options panel press F5 and expand the connection entry again.
Regards,
Stratos -
How to read specific lines from a text file using external table or any other method?
Hi,
I have a text file with delimited data; I have to pick only the odd-numbered rows and load them into a table...
Ex:
row1: 1,2,2,3,3,34,4,4,4,5,5,5,,,5 ( have to load only this row)
row2: 8,9,878,78,657,575,7,5,,,7,7
Hope this is enough..
I am using Oracle 11.2.0 version...
Thanks

There are various ways to do this. I would be inclined to use SQL*Loader. That way you can load it from the client or the server and you can use a SQL*Loader sequence to preserve the row order in the text file. I would load the whole row as a varray into a staging table, then use the TABLE and MOD functions to load the individual numbers from only the odd rows. Please see the demonstration below.
SCOTT@orcl12c> HOST TYPE text_file.csv
1,2,2,3,3,34,4,4,4,5,5,5,,,5
8,9,878,78,657,575,7,5,,,7,7
101,201
102,202
SCOTT@orcl12c> HOST TYPE test.ctl
LOAD DATA
INFILE text_file.csv
INTO TABLE staging
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(whole_row VARRAY TERMINATED BY '/n' (x INTEGER EXTERNAL),
rn SEQUENCE)
SCOTT@orcl12c> CREATE TABLE staging
2 (rn NUMBER,
3 whole_row SYS.OdciNumberList)
4 /
Table created.
SCOTT@orcl12c> HOST SQLLDR scott/tiger CONTROL=test.ctl LOG=test.log
SQL*Loader: Release 12.1.0.1.0 - Production on Tue Aug 27 13:48:37 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
Path used: Conventional
Commit point reached - logical record count 4
Table STAGING:
4 Rows successfully loaded.
Check the log file:
test.log
for more information about the load.
SCOTT@orcl12c> CREATE TABLE a_table
2 (rn NUMBER,
3 data NUMBER)
4 /
Table created.
SCOTT@orcl12c> INSERT INTO a_table (rn, data)
2 SELECT s.rn,
3 t.COLUMN_VALUE data
4 FROM staging s,
5 TABLE (s.whole_row) t
6 WHERE MOD (rn, 2) != 0
7 /
17 rows created.
SCOTT@orcl12c> SELECT * FROM a_table
2 /
RN DATA
1 1
1 2
1 2
1 3
1 3
1 34
1 4
1 4
1 4
1 5
1 5
1 5
1
1
1 5
3 101
3 201
17 rows selected. -
Using User Defined Function is SQL
Hi
I did the following test to see how expensive it is to use user-defined functions in SQL queries, and found that it is really expensive.
Calling SQRT in SQL costs less than calling a dummy function that just returns the parameter value; this has to do with context switching, but how can we get decent performance compared to Oracle-provided functions?
Any comments are welcome, especially regarding the performance of UDFs in SQL and possible solutions.
create or replace function f(i in number) return number is
begin
return i;
end;
declare
l_start number;
l_elapsed number;
n number;
begin
select to_char(sysdate, 'sssssss')
into l_start
from dual;
for i in 1 .. 20 loop
select max(rownum)
into n
from t_tdz12_a0090;
end loop;
select to_char(sysdate, 'sssssss') - l_start
into l_elapsed
from dual;
dbms_output.put_line('first: '||l_elapsed);
select to_char(sysdate, 'sssssss')
into l_start
from dual;
for i in 1 .. 20 loop
select max(sqrt(rownum))
into n
from t_tdz12_a0090;
end loop;
select to_char(sysdate, 'sssssss') - l_start
into l_elapsed
from dual;
dbms_output.put_line('second: '||l_elapsed);
select to_char(sysdate, 'sssssss')
into l_start
from dual;
for i in 1 .. 20 loop
select max(f(rownum))
into n
from t_tdz12_a0090;
end loop;
select to_char(sysdate, 'sssssss') - l_start
into l_elapsed
from dual;
dbms_output.put_line('third: '||l_elapsed);
end;
Results:
first: 303
second: 1051
third: 1515
Kind regards
Taoufik

I find that inline SQL is bad for performance but good to simplify SQL. I keep thinking that it should be possible somehow to use a function to improve performance but have never seen that happen.

Inline SQL is only bad for performance if the database design (table structure, indexes etc.) is poor or the way the SQL is written is poor.
Context switching between SQL and PL/SQL for a User defined function is definitely a way to slow down performance.
Obviously built-in Oracle functions are going to be quicker than User-defined functions because they are written into the SQL and PL/SQL engines and are optimized for the internals of those engines.
There are a few things you can do to improve function performance, shaving microseconds off execution time. Consider using the NOCOPY hint for your parameters to use pointers instead of copying values. NOCOPY is a hint rather than a directive, so it may or may not work. Optimize any SQL in the called function. Don't do anything in loops that does not have to be done inside a loop.

Well, yes, but it's even better to keep all processing in SQL where possible and only resort to PL/SQL when absolutely necessary.
The on-line documentation has suggested that using a DETERMINISTIC function can improve performance, but I have not been able to demonstrate this, and there are notes in Metalink suggesting that this does not happen. My experience is that DETERMINISTIC functions always get executed. There's supposed to be a feature in 11g that actually caches function return values.

Deterministic functions will work well if used in conjunction with a function-based index. That can improve access times when querying data on the function results.
You can use DBMS_PROFILER to get run-time statistics for each line of your function as it is executed to help tune it.

Or code it as SQL. ;) -
Using a PL/SQL function in SQL
Hello
I wrote a function in PL/SQL that creates a table and returns the name of the table as VARCHAR2.
The function is an AUTONOMOUS TRANSACTION.
I want to write a SQL statement that will call the function in the FROM clause.
The SQL should take the name of the table (from the function) and recognize it as a table.
That way I can perform a join between the new table and other tables.
Do you know of a way to do that?
I wrote the following SQL (assume the name of the function is "New_Table()", returning VARCHAR2):
Select *
From (select New_Table() from dual);
This returns the name of the table, but I want SQL that will bring back the data from the table.
Appreciate your help
Hilit

You say "I wrote a function in pl/sql that create a table and return the name of the table as varchar2.", now you want to select data from the newly created table in the same SQL statement?
Although you could cobble something together in PL/SQL using EXECUTE IMMEDIATE, or in a sql script using several statements that would work, where will the data come from?
I'm guessing that you are populating the table in the function and are hoping to use it as something similar to a temp table in Sybase/SqlServer. If so, there are almost certainly several better ways to do this in Oracle.
Why don't you tell us what you are trying to accomplish by building a table in the FROM clause of a sql statement, and maybe someone can give a better solution.
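As one of those "better ways": if the goal is something like a Sybase/SQL Server temp table, Oracle's usual answer is a global temporary table, created once up front rather than on the fly. A minimal sketch (the table and column names here are made up for illustration):

```sql
-- Created once by the developer/DBA, not at runtime:
CREATE GLOBAL TEMPORARY TABLE work_data
(
    id  NUMBER,
    val VARCHAR2(100)
)
ON COMMIT DELETE ROWS;  -- each session sees only its own rows

-- At runtime: just insert and join like any other table.
INSERT INTO work_data (id, val) VALUES (1, 'example');
```

Because the definition is permanent and only the data is session-private, you can join it to other tables in plain SQL with no dynamic DDL at all.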
TTFN
John -
Using 'Function Returning SQL Query' with Flash charts
I have created a pl/sql function that returns a SQL query as a varchar2 of this form:
select null link,
<x value> value,
<Series1 y value> Series1_Label,
<Series2 y value> Series2_Label,
<Series3 y value> Series3_Label
from tablea a
join tableb b
on a.col = b.col
order by <x value>
If I now call the function from a Flash Chart Series SQL box with the Query Source Type set to 'Function Returning SQL Query' like this:
return functionname(to_date('30-sep-2010', 'dd-mon-yyyy'))
it parses correctly and the page is saved; however, when I run the page I don't get any output - nor any error messages or other indication of a problem.
Now, if I call the function in a SQL client, capture the SQL query output using dbms_output and paste that into the Flash Chart Series SQL box - changing the Query Source Type to SQL Query - and save the page it works fine when I run it and returns a multi-series flash chart.
Can anyone suggest either;
1. What have I might have missed or done wrong?
2. Any way to usefully diagnose the problem...
I have tried using the Apex debugger - which is very nice, by the way - but it doesn't provide any info on what my problem might be. I even tried writing my own debug messages from my function using the apex_debug_message package - got nothing...
Thanks,
Eric

Hi Eric,
Try expressing the source as this:
begin
return functionname(to_date('30-sep-2010', 'dd-mon-yyyy'));
end;

That works fine for me, and if I take out the begin-end and the trailing semicolon from the return statement I get the same behavior as you.
It does mention in the help for the source (only during the wizard though) that this source type has to be expressed that way, but I agree it would be helpful if the tool would validate for this format when 'Function Returning SQL Query' is used or give some sort of indication of the trouble. Anyway, this should get you going again.
Hope this helps,
John
If you find this information useful, please remember to mark the post "helpful" or "correct" so that others may benefit as well. -
ESB problem when use merge function in master/detail relationship
I have some problem with the merge function in database adapter.
details:
I have 2 tables in master/detail relationship, both have GUID column as a primary key (GUID generated by ESB).
'car_group' table
pk : guid
unique : group_no, datadate, datatime
===============================
guid, group_no, datadate, datatime, group_detail
===============================
1, 1, 01/01/2008, 09:00, groupdetail01
2, 1, 01/01/2008, 10:00, groupdetail02
'car_group_detail' table
pk : guid
fk : car_group_guid link to car_group.guid
==================
guid, car_group_guid, detail
==================
1, 1, detail01
2, 1, detail02
3, 2, detail03
4, 2, detail04
I used a file adapter as a input, here is an example text file
M, 1, 01/01/2008, 09:00, groupdetail01
D, detail01
D, detail02
M, 1, 01/01/2008, 10:00, groupdetail02
D, detail03
D, detail04
Because I use the merge function I need to specify the columns that will be the condition for the insert/update,
but since the GUID primary key is generated each time, I can't use it, so in TopLink I mapped my unique key as the primary key.
The insert operation works fine, but when an update is required (for example, changing the 'group_detail' column of the master table in the text file),
a SQLException is thrown; the log file shows the adapter trying to update the GUID column of the master table, which the constraint does not allow.
And yes, because the GUID is generated every time, ESB tries to update this column as well, but I don't want that; I need something
like: when an update operation is required, just ignore the GUID column.
I tried marking the GUID column read-only in the TopLink mapping file but still have the same problem; it still generates the UPDATE statement
with the GUID column. I also tried letting a database trigger generate the GUID instead of the ESB function, but that doesn't work in a master/detail
relationship (I think TopLink manages the relationship, is that right?)
Please advise, thanks in advance.

Somebody please help, thanks!
-
Using Pipeline Table functions with other tables
I am on DB 11.2.0.2 and have sparingly used pipelined table functions, but am considering them for a project that has some fairly big (lots of rows) tables. In my tests, selecting from just the pipelined table performs pretty well (whether directly from the pipelined table or from the view I created on top of it). Where I start to see some degradation is when I try to join the pipelined table view to other tables and add WHERE conditions.
ie:
SELECT A.empno, A.ename, A.job, B.sal
FROM EMP_VIEW A, EMP B
WHERE A.empno = B.empno AND
B.mgr = '7839'
I have seen some articles and blogs that mention this as a cardinality issue, and offer some undocumented methods to try and combat.
Can someone please give me some advice or tips on this. Thanks!
I have created a simple example using the emp table below to help illustrate what I am doing.
DROP TYPE EMP_TYPE;
DROP TYPE EMP_SEQ;
CREATE OR REPLACE TYPE EMP_SEQ AS OBJECT
( EMPNO NUMBER(10),
ENAME VARCHAR2(100),
JOB VARCHAR2(100));
CREATE OR REPLACE TYPE EMP_TYPE AS TABLE OF EMP_SEQ;
CREATE OR REPLACE FUNCTION get_emp return EMP_TYPE PIPELINED AS
BEGIN
FOR cur IN (SELECT
empno,
ename,
job
FROM emp)
LOOP
PIPE ROW(EMP_SEQ(cur.empno,
cur.ename,
cur.job));
END LOOP;
RETURN;
END get_emp;
create OR REPLACE view EMP_VIEW as select * from table(get_emp());
SELECT A.empno, A.ename, A.job, B.sal
FROM EMP_VIEW A, EMP B
WHERE A.empno = B.empno AND
B.mgr = '7839'

"I am on DB 11.2.0.2 and have sparingly used pipelined table functions but am considering it for a project that has some fairly big (lots of rows) sized tables"
Which begs the question: WHY? What PROBLEM are you trying to solve and what makes you think using pipelined table functions is the best way to solve that problem?
The lack of information about cardinality is the likely root of the degradation you noticed as already mentioned.
But that should be a red flag about pipelined functions in general. PIPELINED functions hide virtually ALL KNOWLEDGE about the result set that is produced; cardinality is just the tip of the iceberg. Those functions pretty much say 'here is a result set' without ANY information about the number of rows (cardinality), distinct values for any columns, nullability of any columns, constraints that might apply to any columns (foreign key, primary key) and so on.
If you are going to hide all of that information from Oracle that would normally be used to help optimize queries and select the appropriate execution plan you need to have a VERY good reason.
The use of PIPELINED functions should be reserved for those use cases where ordinary SQL and PL/SQL cannot get the job done. That is they are a 'special case' solution.
The classic use case for those functions is for the transform stage of ETL where multiple pipelined functions are chained together: one function feeds its rows to the next function which feeds its rows to another and so on. Each of those 'chained' functions is roughly analogous to a full table scan of the data that often does not need to be joined to other data except perhaps low volumn lookup tables where the data may even be cached.
I suggest that any exploratory or prototyping work you do use standard relational tables until such point as you run into a problem whose solution might require PIPELINED functions to solve. -
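For completeness, one of the "undocumented methods" mentioned earlier in the thread is the CARDINALITY hint, which asserts a row count for the pipelined function so the optimizer has something better than its default guess. It is undocumented and unsupported, so treat this strictly as a diagnostic sketch, using the get_emp example from above:

```sql
-- Undocumented hint: tells the optimizer TABLE(get_emp()) returns ~14 rows.
SELECT /*+ CARDINALITY(a 14) */
       a.empno, a.ename, a.job, b.sal
FROM   TABLE(get_emp()) a,
       emp b
WHERE  a.empno = b.empno
AND    b.mgr   = '7839';
```

Comparing the execution plan with and without the hint is a quick way to confirm whether cardinality misestimation is really the cause of the slowdown.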
Doubt in using field catalog merge function
hi all,
When I am using the function module REUSE_ALV_FIELDCATALOG_MERGE for building the field catalog in an ALV list, it gives an abend message:
'Field catalog is empty'.
What might be the reason for such a message? Can someone help me out with a solution to get rid of it?
I can't populate the catalog manually because I need to display nearly 40 fields in the output.
Thanks in advance.

Hi
It supports the creation of the field catalog for the ALV function modules based either on a structure or table defined in the ABAP Data Dictionary, or on a program-internal table. The program-internal table must either be in a TOP include or its include must be specified explicitly in the interface.

The variant based on a program-internal table should only be used for rapid prototyping, since the following restrictions apply:
o Performance is affected, since the code of the table definition must always be read and interpreted at runtime.
o Dictionary references are only considered if the keywords LIKE or INCLUDE STRUCTURE (not TYPE) are used.

If the field catalog contains more than 90 fields, the first 90 fields are output in the list by default, whereas the remaining fields are only available in the field selection. If the field catalog is passed with values, they are merged with the 'automatically' found information.
Below is an example ABAP program which will populate a simple internal table (it_ekko) with data and
display it using the basic ALV grid functionality (including column total). The example details the main
sections of coding required to implement the ALV grid functionality:
Data declaration
Data retrieval
Build fieldcatalog
Build layout setup
*& Report ZDEMO_ALVGRID *
*& Example of a simple ALV Grid Report *
*& The basic requirement for this demo is to display a number of *
*& fields from the EKKO table. *
REPORT zdemo_alvgrid .
TABLES: ekko.
type-pools: slis. "ALV Declarations
*Data Declaration
TYPES: BEGIN OF t_ekko,
ebeln TYPE ekpo-ebeln,
ebelp TYPE ekpo-ebelp,
statu TYPE ekpo-statu,
aedat TYPE ekpo-aedat,
matnr TYPE ekpo-matnr,
menge TYPE ekpo-menge,
meins TYPE ekpo-meins,
netpr TYPE ekpo-netpr,
peinh TYPE ekpo-peinh,
END OF t_ekko.
DATA: it_ekko TYPE STANDARD TABLE OF t_ekko INITIAL SIZE 0,
wa_ekko TYPE t_ekko.
*ALV data declarations
data: fieldcatalog type slis_t_fieldcat_alv with header line,
gd_tab_group type slis_t_sp_group_alv,
gd_layout type slis_layout_alv,
gd_repid like sy-repid.
*Start-of-selection.
START-OF-SELECTION.
perform data_retrieval.
perform build_fieldcatalog.
perform build_layout.
perform display_alv_report.
*& Form BUILD_FIELDCATALOG
* Build Fieldcatalog for ALV Report
form build_fieldcatalog.
* There are a number of ways to create a fieldcat.
* For the purpose of this example I will build the fieldcatalog manually
* by populating the internal table fields individually and then
* appending the rows. This method can be the most time consuming but can
* also allow you more control of the final product.
* Beware though, you need to ensure that all fields required are
* populated. When using some of functionality available via ALV, such as
* total. You may need to provide more information than if you were
* simply displaying the result
* I.e. Field type may be required in-order for
* the 'TOTAL' function to work.
fieldcatalog-fieldname = 'EBELN'.
fieldcatalog-seltext_m = 'Purchase Order'.
fieldcatalog-col_pos = 0.
fieldcatalog-outputlen = 10.
fieldcatalog-emphasize = 'X'.
fieldcatalog-key = 'X'.
* fieldcatalog-do_sum = 'X'.
* fieldcatalog-no_zero = 'X'.
append fieldcatalog to fieldcatalog.
clear fieldcatalog.
fieldcatalog-fieldname = 'EBELP'.
fieldcatalog-seltext_m = 'PO Item'.
fieldcatalog-col_pos = 1.
append fieldcatalog to fieldcatalog.
clear fieldcatalog.
fieldcatalog-fieldname = 'STATU'.
fieldcatalog-seltext_m = 'Status'.
fieldcatalog-col_pos = 2.
append fieldcatalog to fieldcatalog.
clear fieldcatalog.
fieldcatalog-fieldname = 'AEDAT'.
fieldcatalog-seltext_m = 'Item change date'.
fieldcatalog-col_pos = 3.
append fieldcatalog to fieldcatalog.
clear fieldcatalog.
fieldcatalog-fieldname = 'MATNR'.
fieldcatalog-seltext_m = 'Material Number'.
fieldcatalog-col_pos = 4.
append fieldcatalog to fieldcatalog.
clear fieldcatalog.
fieldcatalog-fieldname = 'MENGE'.
fieldcatalog-seltext_m = 'PO quantity'.
fieldcatalog-col_pos = 5.
append fieldcatalog to fieldcatalog.
clear fieldcatalog.
fieldcatalog-fieldname = 'MEINS'.
fieldcatalog-seltext_m = 'Order Unit'.
fieldcatalog-col_pos = 6.
append fieldcatalog to fieldcatalog.
clear fieldcatalog.
fieldcatalog-fieldname = 'NETPR'.
fieldcatalog-seltext_m = 'Net Price'.
fieldcatalog-col_pos = 7.
fieldcatalog-outputlen = 15.
fieldcatalog-do_sum = 'X'. "Display column total
fieldcatalog-datatype = 'CURR'.
append fieldcatalog to fieldcatalog.
clear fieldcatalog.
fieldcatalog-fieldname = 'PEINH'.
fieldcatalog-seltext_m = 'Price Unit'.
fieldcatalog-col_pos = 8.
append fieldcatalog to fieldcatalog.
clear fieldcatalog.
endform. " BUILD_FIELDCATALOG
*& Form BUILD_LAYOUT
* Build layout for ALV grid report
form build_layout.
gd_layout-no_input = 'X'.
gd_layout-colwidth_optimize = 'X'.
gd_layout-totals_text = 'Totals'(201).
* gd_layout-totals_only = 'X'.
* gd_layout-f2code = 'DISP'. "Sets fcode for when double
* "click(press f2)
* gd_layout-zebra = 'X'.
* gd_layout-group_change_edit = 'X'.
* gd_layout-header_text = 'helllllo'.
endform. " BUILD_LAYOUT
*& Form DISPLAY_ALV_REPORT
* Display report using ALV grid
form display_alv_report.
gd_repid = sy-repid.
call function 'REUSE_ALV_GRID_DISPLAY'
exporting
i_callback_program = gd_repid
* i_callback_top_of_page = 'TOP-OF-PAGE' "see FORM
* i_callback_user_command = 'USER_COMMAND'
* i_grid_title = outtext
is_layout = gd_layout
it_fieldcat = fieldcatalog[]
* it_special_groups = gd_tabgroup
* IT_EVENTS = GT_XEVENTS
i_save = 'X'
* is_variant = z_template
tables
t_outtab = it_ekko
exceptions
program_error = 1
others = 2.
if sy-subrc <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
* WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
endif.
endform. " DISPLAY_ALV_REPORT
*& Form DATA_RETRIEVAL
* Retrieve data from EKPO table and populate itab it_ekko
form data_retrieval.
select ebeln ebelp statu aedat matnr menge meins netpr peinh
up to 10 rows
from ekpo
into table it_ekko.
endform. " DATA_RETRIEVAL -
How can I get a list of values (one or more) used in the WHERE filter of stored procedures and functions in SQL Server?
How can I get a list of values as shown (highlighted) in the sample stored procedure below?
ALTER PROC [dbo].[sp_LoanInfo_Data_Extract] AS
SELECT [LOAN_ACCT].PROD_DT,
[LOAN_ACCT].ACCT_NBR,
[LOAN_NOTE2].OFCR_CD,
[LOAN_NOTE1].CURR_PRIN_BAL_AMT,
[LOAN_NOTE2].BR_NBR
INTO #Table1
FROM
dbo.[LOAN_NOTE1],
dbo.[LOAN_NOTE2],
dbo.[LOAN_ACCT]
WHERE
[LOAN_ACCT].PROD_DT = [LOAN_NOTE1].PROD_DT
and
[LOAN_ACCT].ACCT_NBR = [LOAN_NOTE1].ACCT_NBR
and
[LOAN_NOTE1].PROD_DT = [LOAN_NOTE2].PROD_DT
and
[LOAN_NOTE1].MSTR_ACCT_NBR = [LOAN_NOTE2].MSTR_ACCT_NBR
and
[LOAN_ACCT].PROD_DT = '2015-03-10'
and
[LOAN_ACCT].ACCT_STAT_CD IN
('A','D')
and
[LOAN_NOTE2].LOAN_STAT_CD IN
('J','Z')
Lenfinkel

Hi LenFinkel,
May I know the purpose of this requirement? As Olaf said, you may parse the T-SQL code (or the execution plan), which is not that easy.
I have noticed that the condition values in your stored procedure (SP) are hard-coded, and among them there is a date value; I believe some day you may have to alter the SP when the date expires. So why not declare 3 parameters for the SP instead of hard-coding?
For a multiple-values parameter you can use a table-valued parameter. Then there's no problem getting the values.
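A sketch of what that parameterization might look like, using the columns from the procedure above (the type name dbo.CodeList and the parameter names are made up for illustration):

```sql
-- One-time setup: a table type for the multi-value filters.
CREATE TYPE dbo.CodeList AS TABLE (Code CHAR(1) PRIMARY KEY);
GO

ALTER PROC [dbo].[sp_LoanInfo_Data_Extract]
    @ProdDt    DATE,
    @AcctStats dbo.CodeList READONLY,   -- e.g. ('A','D')
    @LoanStats dbo.CodeList READONLY    -- e.g. ('J','Z')
AS
SELECT LA.PROD_DT, LA.ACCT_NBR, LN2.OFCR_CD,
       LN1.CURR_PRIN_BAL_AMT, LN2.BR_NBR
INTO   #Table1
FROM   dbo.LOAN_NOTE1 LN1
JOIN   dbo.LOAN_NOTE2 LN2 ON LN1.PROD_DT = LN2.PROD_DT
                         AND LN1.MSTR_ACCT_NBR = LN2.MSTR_ACCT_NBR
JOIN   dbo.LOAN_ACCT  LA  ON LA.PROD_DT = LN1.PROD_DT
                         AND LA.ACCT_NBR = LN1.ACCT_NBR
WHERE  LA.PROD_DT = @ProdDt
  AND  LA.ACCT_STAT_CD  IN (SELECT Code FROM @AcctStats)
  AND  LN2.LOAN_STAT_CD IN (SELECT Code FROM @LoanStats);
```

With the values passed in rather than hard-coded, nothing needs to be parsed out of the procedure text at all.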
If you could elaborate your purpose, we may help to find better workaround.
Eric Zhang
TechNet Community Support -
How to use the Table Function defined in package in OWB?
Hi,
I defined a table function in a package. I am trying to use it in OWB using the Table Function operator, but I came to know that OWB R1 supports only standalone table functions.
Is there any other way to use a table function defined in a package? As we create synonyms for functions, is there some similar way to do this?
I tried to create synonyms; the synonym is created, but it shows a compilation error. Finally I found that we can't create synonyms for functions which are defined in packages.
Any one can explain it, how to resolve this problem.
Thank you,
Regards
Gowtham Sen.

Hi Marcos,
Thank you for reply.
OWB R1 supports stand-alone table functions. What I mean is: a table function that is not included in any package is a stand-alone table function.
For example, say sample_tbl_fn is a table function. It is defined as a function and is a stand-alone function. We call this function as "sample_tbl_fn()";
For exampe say sample_pkg is a package. say a function is defined in a package.
then we call that function as sample_pkg.functionname(); This is not a stand alone function.
I hope you understand it.
owb supports stand alone functions.
Here I would like to know, is there any other way to use the functions which are defined in package. While I am trying to use those functions (which are defined in package -- giving the name as packagename.functionname) it is throwing an error "Invalid object name."
Here I would like know, is there any other way to use the table functions which are defined in a package.
Thank you,
Regards,
Gowtham Sen. -
How to use Table valued MSSQL function in OBIEE
Hi all,
Can some one help me to understand how to use table valued function in OBIEE? I want to use a table valued function (MSSQL function, with some input parameter), in the physical layer to pull the data?
I know for MSSQL Stored Procedure we can write as
EXEC SP_NAME @Parameter = 'VALUEOF(NQ_SESSION.Variablename)'
but now I have a table valued function in the query window I can get the data as
select * from myfunction(parametervalue)
In the physical layer of OBIEE I have tried
select * from myfunction('VALUEOF(NQ_SESSION.Variablename)'), but I'm getting an error that the NQ_SESSION variable doesn't have a value; I have initialized the variable but am still getting the error.
Can some one help me to solve this.
Thanks,
Mithun

Follow this link and try yourself; let me know of any issues:
Substring instr issue in obiee
Appreciate if you mark
Edited by: Srini VEERAVALLI on Feb 20, 2013 8:13 AM -
Is there a way of using a mail merge function while on the iPhone or iPad? I wish to email a "Word-style or TXT" document to 250 of my contacts. I have tried downloading my contacts to my PC's Outlook, but only 1 contact comes across at a time, despite the fact that iCloud says it is downloading namedperson + 249 other contacts to a CSV file.
Hi everyone!
Looking also for an app that allows me to merge email and send them out to each recipient individually. Apparently that's not possible yet. Here's what the guys at RedbitsApps told me about Group Email capabilities:
"The current version of the app relies on the device operating system to send the emails. For this reason, sending individual email instead of a single email to multiple recipients is not possible. Apple doesn't allow apps to send single emails to many recipients easily. We may use a custom sending software in a future version."
Let's keep looking guys... -
How to process each records in the derived table which i created using cte table using sql server
I want to process each row from the CTE table I created. How can I traverse from the first row to the second row, and so on?

Ideally you would be doing set-based processing rather than traversing row by row, as that's more efficient. To answer specifically for your scenario we may need more info. Can you explain your exact requirement with some sample data?
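To illustrate the set-based point: most per-row logic over a CTE can be expressed with window functions instead of a cursor. A generic sketch (the table and column names are made up, since the original scenario wasn't given):

```sql
-- Hypothetical example: adjust only the first order per customer,
-- in one statement, with no row-by-row traversal.
WITH NumberedOrders AS
(
    SELECT OrderID,
           CustomerID,
           Amount,
           ROW_NUMBER() OVER (PARTITION BY CustomerID
                              ORDER BY OrderID) AS rn
    FROM dbo.OrderStaging
)
UPDATE NumberedOrders
SET Amount = Amount * 1.1   -- the per-row "processing"
WHERE rn = 1;               -- applies only to the first row per customer
```

Updating through the CTE like this touches every qualifying row in a single set-based pass, which is almost always faster than a cursor loop.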
Please Mark This As Answer if it solved your issue
Please Mark This As Helpful if it helps to solve your issue
Visakh