Table-valued parameters question
I would like to use a table-valued parameter to create an invoice. The process inserts the header into the invoice header table, and then inserts the invoice detail rows into a separate detail table. The problem is that until now I was doing this using 2 separate
procedures:
1. insert into invoice header
2. insert into invoice detail - this was done with a foreach loop:
for each row in the invoice, insert into invoice detail and also adjust the inventory, like so:
USE [Trial]
GO
/****** Object: StoredProcedure [dbo].[ARD_Insert] Script Date: 02/25/2014 15:23:22 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: Debra
-- Create date: January 28, 2013
-- Description: Insert into the ARD table
-- =============================================
ALTER PROCEDURE [dbo].[ARD_Insert]
-- Add the parameters for the stored procedure here
@InvoiceARD int,
@Item int,
@Description nvarchar(50),
@Qty int,
@Price decimal(10,2),
@TempWOID nvarchar(5),
@InventoryQuantityAddSubtract int
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
--BEGIN TRAN
INSERT INTO ARD(INVOICE,ITEM,[DESC],QTY,PRICE,TEMPWOID)
VALUES(@InvoiceARD, @Item, @Description, @Qty, @Price, @TempWOID)
DECLARE @Type nvarchar(3)
SET @Type = (SELECT TYPE FROM INV WHERE ITEM = @Item)
IF(@InventoryQuantityAddSubtract = 0) --Add
BEGIN
EXEC InventoryQuantityAdd_Update @Type,@Qty,@Item
END
ELSE IF(@InventoryQuantityAddSubtract = 1)--Subtract
BEGIN
EXEC InventoryQuantitySubtract_Update @Type,@Qty,@Item
END
--COMMIT TRAN
END
How would I be able to do this using table-valued parameters?
Debra has a question
This is the inventory quantity stored procedure:
USE [Trial]
GO
/****** Object: StoredProcedure [dbo].[InventoryQuantityAdd_Update] Script Date: 02/25/2014 15:43:40 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: Debra
-- Create date: October 30, 2013
-- Description: Add quantity to inventory.
-- =============================================
ALTER PROCEDURE [dbo].[InventoryQuantityAdd_Update]
-- Add the parameters for the stored procedure here
@Type nvarchar(3),
@Qty int,
@Item int
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
UPDATE INV SET ONHAND = ONHAND + @Qty,STAMPED = CASE WHEN @Type = 'CIG' THEN STAMPED + @Qty ELSE 0 END,
LASTDATE = CONVERT(DATE, GETDATE()) WHERE ITEM = @Item
END
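Sketched below is one way the per-row loop could collapse into a single set-based call, assuming the ARD and INV columns shown in the two procedures above. The type name, the AddSubtract column, and the net-quantity aggregation are illustrative assumptions, and since InventoryQuantitySubtract_Update is not shown, the subtract branch here simply mirrors the add procedure's CASE logic:

```sql
-- Hypothetical table type carrying one row per invoice detail line
CREATE TYPE dbo.InvoiceDetailType AS TABLE
(
    Item        int,
    [Desc]      nvarchar(50),
    Qty         int,
    Price       decimal(10,2),
    TempWOID    nvarchar(5),
    AddSubtract int            -- 0 = add to inventory, 1 = subtract
);
GO

CREATE PROCEDURE dbo.ARD_InsertSet
    @InvoiceARD int,
    @Details    dbo.InvoiceDetailType READONLY
AS
BEGIN
    SET NOCOUNT ON;

    -- One set-based INSERT replaces the foreach loop over detail rows
    INSERT INTO ARD (INVOICE, ITEM, [DESC], QTY, PRICE, TEMPWOID)
    SELECT @InvoiceARD, Item, [Desc], Qty, Price, TempWOID
    FROM @Details;

    -- One UPDATE adjusts inventory for every item, netting adds and subtracts
    UPDATE i
    SET ONHAND   = i.ONHAND + d.NetQty,
        STAMPED  = CASE WHEN i.TYPE = 'CIG'
                        THEN i.STAMPED + d.NetQty ELSE 0 END,
        LASTDATE = CONVERT(date, GETDATE())
    FROM INV AS i
    JOIN (SELECT Item,
                 SUM(CASE WHEN AddSubtract = 0 THEN Qty ELSE -Qty END) AS NetQty
          FROM @Details
          GROUP BY Item) AS d
      ON d.Item = i.ITEM;
END
GO

-- Example call with made-up values:
-- DECLARE @d dbo.InvoiceDetailType;
-- INSERT @d VALUES (5, N'Widget', 3, 9.99, N'WO1', 0);
-- EXEC dbo.ARD_InsertSet @InvoiceARD = 1, @Details = @d;
```

From ADO.NET, the detail rows would be passed as a DataTable in a SqlParameter with SqlDbType.Structured and TypeName = "dbo.InvoiceDetailType".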
Debra has a question
Similar Messages
-
The last I heard on the efforts to make TVPs writable was that they were on the roadmap for the 2008 R2 release but that it didn't make the cut.
Srini Acharya commented in the connect item associated with this feature that...
Allowing table valued parameters to be read/write involves quite a bit of work on the SQL Engine
side as well as client protocols. Due to time/resource constraints as well as other priorities, we will not be able to take up this work as part of SQL Server 2008 release. However, we have investigated this issue and have this firmly in our radar to address
as part of the next release of SQL Server.
I have never heard any information regarding why this was pulled from the 2008R2 release and why it wasn't implemented in either SQL Server 2012 or SQL Server 2014. Can anyone shed any light on what's going on here and why it hasn't been enabled
yet? I've been champing at the bit for the better part of 6 years now to be able to move my Data Access Methodology to a more properly structured message oriented architecture using Request and Response Table Types for routing messages to and from SQL
Server Functions and Stored Procs.
Please tell me that I won't have to manually build all of this out with XML for much longer.
Note that in SQL Server 2008 table valued parameters are read only. But as you notice we actually
require you to write READONLY. So that actually then means that at some point in the future maybe if you say please, please please often enough we might be able to actually make them writable as well at some point.
Please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please, please!
Can someone please explain what the complication is?
It makes no sense to me that you can
1)declare a table typed variable inside a stored procedure
2)insert items into it
3)return the contents of it with a select from that table variable
but you can't say "hey. The OUTPUT parameter that was specified by the calling client points to this same variable."
I would like to understand what is so different between
create database [TechnetSSMSMessagingExample]
create schema [Resources]
create schema [Messages]
create schema [Services]
create type [Messages].[GetResourcesRequest] AS TABLE([Value] [varchar](max) NOT NULL)
create type [Messages].[GetResourcesResponse] AS TABLE([Resource] [varchar](max) NOT NULL, [Creator] [varchar](max) NOT NULL,[AccessedOn] [datetime2](7) NOT NULL)
create table [Resources].[Contrivance] ([Value] [varchar](max) NOT NULL, [CreatedBy] [varchar](max) NOT NULL) ON [PRIMARY]
create Procedure [Services].[GetResources]
(@request [Messages].[GetResourcesRequest] READONLY)
AS
DECLARE @response [Messages].[GetResourcesResponse]
insert @response
select [Resource].[Value] [Resource]
,[Resource].[CreatedBy] [Creator]
,GETDATE() [AccessedOn]
from [Resources].[Contrivance] as [Resource]
inner join @request as [request]
on [Resource].[Value] = [request].[Value]
select [Resource],[Creator],[AccessedOn]
from @response
GO
and
create Procedure [Services].[GetResources]
( @request [Messages].[GetResourcesRequest] READONLY
,@response [Messages].[GetResourcesResponse] OUTPUT)
AS
insert @response
select [Resource].[Value] [Resource]
,[Resource].[CreatedBy] [Creator]
,GETDATE() [AccessedOn]
from [Resources].[Contrivance] as [Resource]
inner join @request as [request]
on [Resource].[Value] = [request].[Value]
GO
that this cannot be accomplished in 7 years with 3 major releases of SQL Server.
If you build the database that I provided (I didn't provide flow control commands, of course so they'll need to be chunked into individual executable scripts) and then
insert into [Resources].[Contrivance] values('Arbitrary','kalanbates')
insert into [Resources].[Contrivance] values('FooBar','kalanbates')
insert into [Resources].[Contrivance] values('NotInvolvedInSample','someone-else')
GO
DECLARE @request [Messages].[GetResourcesRequest]
insert into @request
VALUES ('Arbitrary')
,('FooBar')
EXEC [Services].[GetResources] @request
your execution will return a result set containing 2 rows.
Why can these not 'just' be pushed into a "statically typed" OUTPUT parameter rather than being returned as a loose result set that then has to be sliced and diced as a dynamic object by the calling client? -
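Until (unless) writable TVPs arrive, the caller can at least land the loose result set into its own typed table variable with INSERT ... EXEC - a sketch against the sample schema above:

```sql
DECLARE @request  [Messages].[GetResourcesRequest];
DECLARE @response [Messages].[GetResourcesResponse];

INSERT INTO @request VALUES ('Arbitrary'), ('FooBar');

-- Capture the procedure's result set into the statically typed variable
INSERT INTO @response ([Resource], [Creator], [AccessedOn])
EXEC [Services].[GetResources] @request;

SELECT [Resource], [Creator], [AccessedOn] FROM @response;
```

It still ships as a loose result set on the wire, though, and the column list has to be kept in sync by hand - exactly the boilerplate a real OUTPUT TVP would remove.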
Use of Table Valued Parameter to restore databases
I'm a noob with table valued parameters. Not sure if I can use TVP for what I need to do. I want to restore/refresh multiple databases from arbitrary number of .BAK files. I can successfully populate a TVP with the needed
source information which includes:
Database name
File/device name (i.e., xxx.BAK file)
Logical data file name
Logical log file name
Now I want to create a stored procedure that contains Restore Database command like this:
RESTORE DATABASE <@database name>
FROM <@path and name of .bak file>
WITH MOVE <@logical data file> TO <new path and file name>,
MOVE <@logical log file name> TO <new path and file name>;
Can I replace those variables with the column values in the TVP? I'm not sure, because all the stored proc examples I see simply insert rows from the TVP into rows of an existing table.
Yes, but you would need to run a cursor over your TVP:
DECLARE @db sysname, @path nvarchar(260),
        @logical_data_file sysname, @new_data_path nvarchar(260),
        @logical_log_file sysname, @new_log_path nvarchar(260)
DECLARE cur CURSOR STATIC LOCAL FOR
   SELECT db, path, logical_data_file, new_data_path, logical_log_file,
          new_log_path
   FROM @TVP
OPEN cur
WHILE 1 = 1
BEGIN
   FETCH cur INTO @db, @path, @logical_data_file, @new_data_path,
                  @logical_log_file, @new_log_path
   IF @@fetch_status <> 0
      BREAK
   RESTORE DATABASE @db FROM DISK = @path
   WITH MOVE @logical_data_file TO @new_data_path,
        MOVE @logical_log_file TO @new_log_path
END
CLOSE cur
DEALLOCATE cur
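The TVP itself would come from a table type declared up front - a sketch with hypothetical names and sizes matching the restore information listed in the question:

```sql
-- One row per database to restore (names and lengths are assumptions)
CREATE TYPE dbo.RestoreListType AS TABLE
(
    db                sysname,         -- database name
    path              nvarchar(260),   -- path to the .BAK file
    logical_data_file sysname,
    new_data_path     nvarchar(260),
    logical_log_file  sysname,
    new_log_path      nvarchar(260)
);
```

A procedure taking @TVP dbo.RestoreListType READONLY can then iterate it with the cursor shown above, since RESTORE DATABASE accepts variables for the database name, disk path, and MOVE targets.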
Erland Sommarskog, SQL Server MVP, [email protected] -
Introduction
In SQL Server Reporting Services, we can define a mapping between the fields that are returned in the query to specific delivery options and to report parameters in a data-driven subscription.
For a report with a parameter (such as YEAR) that allows multiple values, when creating a data-driven subscription, how can we pass a record like the one below to show the correct data (data for years 2012, 2013 and 2014)?
EmailAddress        Parameter         Comment
[email protected]    2012,2013,2014    NULL
In this article, I will demonstrate how to configure a data-driven subscription which gets multi-value parameters from one column of a database table.
Workaround
Generally, if we pass the "Parameter" column directly to the report in step 5 when creating the data-driven subscription, the value "2012,2013,2014" will be regarded as a single value: Reporting Services will use "2012,2013,2014" to filter the data. However, there are no records whose YEAR field equals "2012,2013,2014", so when the subscription executes we will get an error in the log (C:\Program Files\Microsoft SQL Server\MSRS10_50.MSSQLSERVER\Reporting Services\LogFiles):
Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportParameterException: Default value or value provided for the report parameter 'Name' is not a valid value.
This means there is no such value in the parameter's available-values list; it is an invalid parameter value. If we change the parameter records like below:
EmailAddress Parameter Comment
[email protected] 2012 NULL
[email protected] 2013 NULL
[email protected] 2014 NULL
In this case, Reporting Services will generate 3 reports for the one data-driven subscription, each report covering only one year, which obviously cannot meet the requirement.
Currently there is no direct solution to this issue. The workaround is to create two reports: one used by end users to view the report, and another used to create the data-driven subscription.
On the report used to create the data-driven subscription, uncheck the "Allow multiple values" option for the parameter, and do not specify available values or default values for this parameter. Then change the Filter
From
Expression:[ParameterName]
Operator :In
Value :[@ParameterName]
To
Expression:[ParameterName]
Operator :In
Value :Split(Parameters!ParameterName.Value,",")
In this case, we can specify a value like "2012,2013,2014" from database to the data-driven subscription.
Applies to
Microsoft SQL Server 2005
Microsoft SQL Server 2008
Microsoft SQL Server 2008 R2
Microsoft SQL Server 2012
Please click to vote if the post helps you. This can be beneficial to other community members reading the thread.
For every Auftrag, there are multiple Position entries.
The rest of the blocks don't seem to have any relation.
So you can check this code to see how the internal table lt_str is built: its first 3 fields hold the Auftrag data and the next 3 fields hold the Position data. The structure is flat, assuming that every Position record is related to the preceding Auftrag.
Try out this snippet.
DATA lt_data TYPE TABLE OF string.
DATA lv_data TYPE string.
CALL METHOD cl_gui_frontend_services=>gui_upload
EXPORTING
filename = 'C:\temp\test.txt'
CHANGING
data_tab = lt_data
EXCEPTIONS
OTHERS = 19.
CHECK sy-subrc EQ 0.
TYPES:
BEGIN OF ty_str,
a1 TYPE string,
a2 TYPE string,
a3 TYPE string,
p1 TYPE string,
p2 TYPE string,
p3 TYPE string,
END OF ty_str.
DATA: lt_str TYPE TABLE OF ty_str,
ls_str TYPE ty_str,
lv_block TYPE string,
lv_flag TYPE boolean.
LOOP AT lt_data INTO lv_data.
CASE lv_data.
WHEN '[Version]' OR '[StdSatz]' OR '[Arbeitstag]' OR '[Pecunia]'
OR '[Mita]' OR '[Kunde]' OR '[Auftrag]' OR '[Position]'.
lv_block = lv_data.
lv_flag = abap_false.
WHEN OTHERS.
lv_flag = abap_true.
ENDCASE.
CHECK lv_flag EQ abap_true.
CASE lv_block.
WHEN '[Auftrag]'.
SPLIT lv_data AT ';' INTO ls_str-a1 ls_str-a2 ls_str-a3.
WHEN '[Position]'.
SPLIT lv_data AT ';' INTO ls_str-p1 ls_str-p2 ls_str-p3.
APPEND ls_str TO lt_str.
ENDCASE.
ENDLOOP. -
In which condition Table valued function should prefer over SP except use in joins?
Hi,
My requirement is:
Entity Framework needs to call a DB object (TVF or SP), which will provide some data that will then be worked on at the app level.
The DB object would be simple - one result set; it will join 5 tables and return around 30 columns. It would be a parameterized query, so I can't use a view.
Now my question is which DB object would be best to use, a table-valued function or a stored procedure, and why?
I googled it and found some interesting links (for example http://technet.microsoft.com/en-us/library/ms187650(v=sql.105).aspx);
they mention conditions for converting an SP to a TVF but not the reason - why should I convert?
Both have the same plan caching strategy. An SP has many advantages over a TVF, but I don't see any technical advantage of a TVF over an SP except that it can be used in joins and the like.
So, in short, my question is: why can't I use an SP in all cases, why would I use a TVF, and which kind - table-valued or multi-statement?
I would appreciate your time and response.
According to a few recent blogs you should be able to use a TVP or stored procedure with EF 6.1.2 with ease. In our application we haven't switched yet to 6.1.2 (we're using 6.0.0) and there is no support for stored procedures or functions, so we use StoreQuery.
I am wondering if you can share your experience of using EF with SP or TVP (and document the steps).
I am also wondering as how exactly it's working behind the scenes and where the full query is taking place. Say, in our case we may want to add some extra conditions after retrieving a set using, say, SP. Would the final query execute on the client (e.g.
SP executed on the server, result returned and then extra conditions executed on the "client")?
As I said, right now we're using StoreQuery which means that our extra conditions must be case - sensitive as opposed to SQL Server case insensitive. So, if someone already tried that scenario and can tell me how exactly it works, I would appreciate it.
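On the original TVF-versus-SP question, the one hard technical difference is composability: an inline TVF participates in the surrounding query, while a procedure's result set must be materialized before anything can join or filter it. A sketch with hypothetical table and column names:

```sql
-- Inline TVF: the optimizer expands it like a parameterized view
CREATE FUNCTION dbo.fOrdersForCustomer (@CustomerId int)
RETURNS TABLE
AS RETURN
(
    SELECT o.OrderId, o.OrderDate, o.Total
    FROM dbo.Orders AS o
    WHERE o.CustomerId = @CustomerId
);
GO

-- Composes with joins and outer filters in a single query plan
SELECT c.Name, f.OrderId, f.Total
FROM dbo.Customers AS c
CROSS APPLY dbo.fOrdersForCustomer(c.CustomerId) AS f
WHERE f.Total > 100;

-- A stored procedure's rows, by contrast, must first be landed, e.g.:
-- INSERT INTO #orders EXEC dbo.GetOrdersForCustomer @CustomerId = 42;
```

A multi-statement TVF materializes its table variable and historically carries poor cardinality estimates, so when the body fits in a single SELECT, the inline form is usually the one worth preferring.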
Another question about EF - I defined a property as
[Column(TypeName = "varchar")]
public string PhoneNumber { get; set; } // area code + phone
and in the LINQ query as
var query = db.Accounts.Select(a => new AccountsList
{
    AcctName = a.AcctName,
    Contact = a.Contact,
    FullName = a.FullName,
    AreaCode = a.AreaCode,
    Phone = a.Phone,
    Hidden = a.Hide,
    Hide = a.Hide,
    PhoneNumber = a.AreaCode.Trim() + a.Phone.Trim(),
    AcctTypeId = a.AcctTypeId
});
and I see that it's translated into CASE WHEN AreaCode IS NULL THEN N'' ELSE RTRIM(LTRIM(AreaCode)) END + ...
My question is: why does EF do this if there is no mention at all in the class of how NULL is supposed to be treated? Is it a bug?
For every expert, there is an equal and opposite expert. - Becker's Law
My blog
My TechNet articles -
Table Values in Export/Import/Tables RFC calls
Hi
I know that using Adaptive RFC, the best practice is to use the Tables section of the function rather than the import/export.
However, in ECC6.0, when creating entries in the Tables section of the FM it tells me that this section is obsolete.
Should I start to use the export/import parameters instead?
Cheers
Ian
Hi Ashu
Thanks for the reply, but there is no code snippet. It is a 'best practice' question.
The document <a href="http://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/11c3b051-0401-0010-fe9a-9eabd9c216de">Effective Web Dynpro - Adaptive RFC</a> details that table values should not be passed in the exporting/importing parameters of the R/3 RFC-enabled function module; they should always be added to the Tables section.
However, in ECC6.0, when adding entries into the Tables section of an R/3 function module, it says that this practice is obsolete.
Therefore, what is the current best practice for Adaptive RFC??
Cheers
Ian -
Problem with dg4msql and table-valued functions
Have configured dg4msql to connect from my oracle db to ms sql server.
Am able to do simple SELECTs from ms sql tables like:
select * from "sys"."procedures"@dg4msql;
or
select * from "dbo"."SomeTable"@dg4msql;
But am unable to do a SELECT from a table-valued function:
select * from "dbo"."GetData"@dg4msql('param1value');
or
select * from "dbo"."GetData"('param1value')@dg4msql;
In both cases I get:
ORA-00933: SQL command not properly ended
It does not like the parameters portion of the query ("('param1value')")
initdg4msql.ora:
HS_FDS_CONNECT_INFO=[svr1]//mydb
HS_FDS_TRACE_LEVEL=OFF
HS_FDS_PROC_IS_FUNCT=TRUE
HS_FDS_RESULTSET_SUPPORT=TRUE
Have tried the other set of params:
HS_FDS_PROC_IS_FUNCT=FALSE
HS_FDS_RESULTSET_SUPPORT=TRUE
Same story. After changing the init*** file have bounced both Listeners (DB and Gateway), reconnected, and re-run the query.
Have I missed something?
Any help is greatly appreciated!Sorry, but for me it looks you did not get the problem.
Oracle® Database Gateway for SQL Server User's Guide:
11g Release 2 (11.2)
Part Number E12069-02
*2 SQL Server Gateway Features and Restriction*
Result Sets and Stored Procedures
The Oracle Database Gateway for SQL Server provides support for stored procedures which return result sets.
By default, all stored procedures and functions do not return a result set to the user. To enable result sets, set the HS_FDS_RESULTSET_SUPPORT parameter value to TRUE.
PL/SQL Program Fetching from Result Sets in Sequential Mode
-- Execute procedure
out_arg := null;
refcurproc@MSQL('Hello World', out_arg, rc1);
Somewhere in this forum I've seen a message that the syntax "SELECT ... FROM sp@db(param1, param2)" works.
Anyway, even with the PL/SQL block the error message is the same - ORA-00933 "SQL command not properly ended"
and the cursor (* in SQL*PLUS) is put just at the first bracket.
Edited by: user636213 on Aug 10, 2012 5:17 AM -
Table values are not passed to context.
Hi,
I'm trying to build an online (WD ABAP) form with a BAPI which has import, export and tables parameters.
I have bound the Datasource to the main node, and the Template source I have bound to the form which I created with the form interface, using context from the BAPI. That gave me all the importing and exporting parameters, and the tables appear under the Changing node of the Data View in the form. But there is another node called DATA created automatically under the Changing node; under that DATA node I got Table, and under that another DATA node is created, and there I have all the attributes.
I dragged the table onto the form. But when I'm testing the form by submitting, I'm not getting the table values which were entered in the table, although all the other values bound to importing parameters come through fine.
Can someone tell me how to get the values from the table into the context? And can I have a dynamic table in the form mapped to the context, so that I can update the data through the BAPI?
Warm Regards,
J.Smitha.
Hi,
Smitha, you can definitely use a dynamic table in an interactive form. I had a similar problem and I achieved it like the following. Basically, you have to bind the table.
If you want a fixed number of rows in the interactive form, then in the wddoinit method bind the internal table to your table node. For example, if you want 2 rows in the form, loop 2 times. So by default when you open the form you will get two rows in the table.
**************BIND THE ITAB ****************************
DO 2 TIMES.
APPEND LW_LFBK TO LT_LFBK.
CLEAR LW_LFBK.
ENDDO.
CALL METHOD lo_nd_t_lfbk->bind_table
EXPORTING
new_items = LT_LFBK.
If you want a dynamic table, then put a submit button in the form instead of a normal button.
In the onActionSubmit handler, append a row so that every time you click that submit button it adds a new row -
use the above coding in onActionSubmit instead of wddoinit.
Thats it.
Regards,
Ravi -
Oracle Table Storage Parameters - a nice reading
Gony's reading exercise for 07/09/2009 -
The below is from the web source http://www.praetoriate.com/t_%20tuning_storage_parameters.htm. Very good material. The notes refer to figures and diagrams which cannot be seen below, but the text is very useful.
Let’s begin this chapter by introducing the relationship between object storage parameters and performance. Poor object performance within Oracle is experienced in several areas:
Slow inserts Insert operations run slowly and have excessive I/O. This happens when blocks on the freelist only have room for a few rows before Oracle is forced to grab another free block.
Slow selects Select statements have excessive I/O because of chained rows. This occurs when rows “chain” and fragment onto several data blocks, causing additional I/O to fetch the blocks.
Slow updates Update statements run very slowly with double the amount of I/O. This happens when update operations expand a VARCHAR or BLOB column and Oracle is forced to chain the row contents onto additional data blocks.
Slow deletes Large delete statements can run slowly and cause segment header contention. This happens when rows are deleted and Oracle must relink the data block onto the freelist for the table.
As we see, the storage parameters for Oracle tables and indexes can have an important effect on the performance of the database. Let’s begin our discussion of object tuning by reviewing the common storage parameters that affect Oracle performance.
The pctfree Storage Parameter
The purpose of pctfree is to tell Oracle when to remove a block from the object’s freelist. Since the Oracle default is pctfree=10, blocks remain on the freelist while they are less than 90 percent full. As shown in Figure 10-5, once an insert makes the block grow beyond 90 percent full, it is removed from the freelist, leaving 10 percent of the block for row expansion. Furthermore, the data block will remain off the freelist even after the space drops below 90 percent. Only after subsequent delete operations cause the space to fall below the pctused threshold of 40 percent will Oracle put the block back onto the freelist.
Figure 10-83: The pctfree threshold
The pctused Storage Parameter
The pctused parameter tells Oracle when to add a previously full block onto the freelist. As rows are deleted from a table, the database blocks become eligible to accept new rows. This happens when the amount of space in a database block falls below pctused, and a freelist relink operation is triggered, as shown in Figure 10-6.
Figure 10-84: The pctused threshold
For example, with pctused=60, all database blocks that have less than 60 percent will be on the freelist, as well as other blocks that dropped below pctused and have not yet grown to pctfree. Once a block deletes a row and becomes less than 60 percent full, the block goes back on the freelist. When rows are deleted, data blocks become available when a block’s free space drops below the value of pctused for the table, and Oracle relinks the data block onto the freelist chain. As the table has rows inserted into it, it will grow until the space on the block exceeds the threshold pctfree, at which time the block is unlinked from the freelist.
The freelists Storage Parameter
The freelists parameter tells Oracle how many segment header blocks to create for a table or index. Multiple freelists are used to prevent segment header contention when several tasks compete to INSERT, UPDATE, or DELETE from the table. The freelists parameter should be set to the maximum number of concurrent update operations.
Prior to Oracle8i, you must reorganize the table to change the freelists storage parameter. In Oracle8i, you can dynamically add freelists to any table or index with the alter table command. In Oracle8i, adding a freelist reserves a new block in the table to hold the control structures. To use this feature, you must set the compatible parameter to 8.1.6 or greater.
The freelist groups Storage Parameter for OPS
The freelist groups parameter is used in Oracle Parallel Server (Real Application Clusters). When multiple instances access a table, separate freelist groups are allocated in the segment header. The freelist groups parameter should be set to the number of instances that access the table. For details on segment internals with multiple freelist groups, see Chapter 13.
NOTE: The variables are called pctfree and pctused in the create table and alter table syntax, but they are called PCT_FREE and PCT_USED in the dba_tables view in the Oracle dictionary. The programmer responsible for this mix-up was promoted to senior vice president in recognition of his contribution to the complexity of the Oracle software.
Summary of Storage Parameter Rules
The following rules govern the settings for the storage parameters freelists, freelist groups, pctfree, and pctused. As you know, the value of pctused and pctfree can easily be changed at any time with the alter table command, and the observant DBA should be able to develop a methodology for deciding the optimal settings for these parameters. For now, accept these rules, and we will be discussing them in detail later in this chapter.
There is a direct trade-off between effective space utilization and high performance, and the table storage parameters control this trade-off:
For efficient space reuse A high value for pctused will effectively reuse space on data blocks, but at the expense of additional I/O. A high pctused means that relatively full blocks are placed on the freelist. Hence, these blocks will be able to accept only a few rows before becoming full again, leading to more I/O.
For high performance A low value for pctused means that Oracle will not place a data block onto the freelist until it is nearly empty. The block will be able to accept many rows until it becomes full, thereby reducing I/O at insert time. Remember that it is always faster for Oracle to extend into new blocks than to reuse existing blocks. It takes fewer resources for Oracle to extend a table than to manage freelists.
While we will go into the justification for these rules later in this chapter, let’s review the general guidelines for setting of object storage parameters:
Always set pctused to allow enough room to accept a new row. We never want to have a free block that does not have enough room to accept a row. If we do, this will cause a slowdown since Oracle will attempt to read five “dead” free blocks before extending the table to get an empty block.
The presence of chained rows in a table means that pctfree is too low or that db_block_size is too small. In most cases within Oracle, RAW and LONG RAW columns make huge rows that exceed the maximum block size for Oracle, making chained rows unavoidable.
If a table has simultaneous insert SQL processes, it needs to have simultaneous delete processes. Running a single purge job will place all of the free blocks on only one freelist, and none of the other freelists will contain any free blocks from the purge.
The freelist parameter should be set to the high-water mark of updates to a table. For example, if the customer table has up to 20 end users performing insert operations at any time, the customer table should have freelists=20.
The freelist groups parameter should be set to the number of Real Application Clusters instances (Oracle Parallel Server in Oracle8i) that access the table.
sb92075 wrote:
goni,
Please let go of the 20th century & join the rest of the world in the 21st century.
The information presented is obsolete & can be ignored when using ASSM, & ASSM is the default with V10 & V11.
I said the same over here for the exact same thread; not sure what the heck the OP is up to?
Oracle Table Storage Parameters - a nice reading
regards
Aman.... -
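For reference, the parameters discussed above appear in DDL roughly like this (a dictionary-managed-tablespace sketch with made-up table and column names; as the replies note, pctused and freelists are ignored under ASSM, the default from 10g onward):

```sql
CREATE TABLE customer
(
    customer_id NUMBER,
    cust_name   VARCHAR2(100)
)
PCTFREE 10    -- take the block off the freelist once it is 90% full
PCTUSED 40    -- relink it to the freelist when it drops below 40% full
STORAGE (FREELISTS 20        -- high-water mark of concurrent inserters
         FREELIST GROUPS 2); -- number of OPS/RAC instances
```

pctfree and pctused can later be changed with ALTER TABLE, whereas before Oracle8i changing freelists required reorganizing the table, as the text explains.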
Table-Valued Function not returning any results
ALTER FUNCTION [dbo].[fGetVendorInfo]
(
    @VendorAddr char(30),
    @RemitAddr char(100),
    @PmntAddr char(100)
)
RETURNS @VendorInfo TABLE
(
    vengroup char(25),
    vendnum char(9),
    remit char(10),
    payment char(10)
)
AS
BEGIN
insert into @VendorInfo (vengroup,vendnum)
select ks183, ks178
from hsi.keysetdata115
where ks184 like ltrim(@VendorAddr) + '%'
update @VendorInfo
set remit = r.remit
from
@VendorInfo ven
INNER JOIN
(Select ksd.ks188 as remit, ksd.ks183 as vengroup, ksd.ks178 as vendnum
from hsi.keysetdata117 ksd
inner join @VendorInfo ven
on ven.vengroup = ksd.ks183 and ven.vendnum = ksd.ks178
where ksd.ks192 like ltrim(@RemitAddr) + '%'
and ks189 = 'R') r
on ven.vengroup = r.vengroup and ven.vendnum = r.vendnum
update @VendorInfo
set payment = p.payment
from
@VendorInfo ven
INNER JOIN
(Select ksd.ks188 as payment, ksd.ks183 as vengroup, ksd.ks178 as vendnum
from hsi.keysetdata117 ksd
inner join @VendorInfo ven
on ven.vengroup = ksd.ks183 and ven.vendnum = ksd.ks178
where ksd.ks192 like ltrim(@PmntAddr) + '%'
and ks189 = 'P') p
on ven.vengroup = p.vengroup and ven.vendnum = p.vendnum
RETURN
END
GO
Hi all,
I'm having an issue where my table-valued function is not returning any results.
When I break it out into a select statement (creating a table, and replacing the passed-in parameters with the actual values) it works fine, but when passing in the same exact values (copied and pasted) it just returns an empty table.
The odd thing is I could have SWORN this worked on Friday, but I'm not 100% sure.
The attached code is my function.
Here is how I'm calling it:
SELECT * from dbo.fGetVendorInfo('AUDIO DIGEST', '123 SESAME ST', 'TOP OF OAK MOUNTAIN')
I tried removing the "+ '%'" and passing it in, but it doesn't work.
Like I said if I break it out and run it as T-SQL, it works just fine.
Any assistance would be appreciated.
Why did you use a proprietary user function instead of a VIEW? I know the answer is that your mindset does not use sets. You want procedural code. In fact, I see you use an "f-" prefix to mimic the old FORTRAN II convention for in-line functions!
Did you know that the old Sybase UPDATE.. FROM.. syntax does not work? It gives the wrong answers! Google it.
Your data element names make no sense. What is “KSD.ks188”?? Well, it is a “payment_<something>”, “KSD.ks183” is “vendor_group” and “KSD.ks178” is “vendor_nbr” in your magical world where names mean different things from table to table!
An SQL programmer might have a VIEW with the information, something like:
CREATE VIEW Vendor_Addresses
AS
SELECT vendor_group, vendor_nbr, vendor_addr, remit_addr, pmnt_addr
FROM ..
WHERE ..;
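An aside on the empty result itself (an assumption based only on the declarations shown): the parameters are declared as char(100), which is blank-padded on the right, and LTRIM removes only leading spaces, so ltrim(@VendorAddr) + '%' carries the trailing pad spaces into the pattern and the LIKE stops matching, even though the same literal values work in a standalone SELECT. A sketch of the fix, trimming both sides:

```sql
-- char(100) is blank-padded; LTRIM strips only leading spaces, so the
-- wildcard pattern still carries trailing pads. Trim both sides first.
insert into @VendorInfo (vengroup, vendnum)
select ks183, ks178
from hsi.keysetdata115
where ks184 like rtrim(ltrim(@VendorAddr)) + '%'
```

The same change would apply to the @RemitAddr and @PmntAddr predicates, or the parameters could be declared varchar(100) instead.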
--CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Data, Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL -
Problem with passing table values to RFC
Hi all,
I am passing values in tables to an RFC. There are no import/export parameters in the RFC; we are passing only tables.
There are two tables in the RFC, I_Dept and I_Subdept. Initially the RFC is executed to get the Dept, which works fine because no input table value needs to be set. But to get the sub-department I need to set the dept in I_Dept, and after executing the RFC I should get values in table I_Subdept. The code is as below:
wdContext.nodeOutput_I_Dept().invalidate();
wdContext.nodeOutput_I_Subdept().invalidate();
Z_Bapi_Dept_Values_Input d_Input = new Z_Bapi_Dept_Values_Input();
wdContext.nodeZ_Bapi_Dept_Values_Input().bind(d_Input);
Zdept dept = new Zdept();
dept.setZname("Sales");
d_Input.addI_Dept(dept);
try
{
wdContext.nodeZ_Bapi_Dept_Values_Input().currentZ_Bapi_Dept_Values_InputElement().modelObject().execute();
wdContext.nodeZ_Bapi_Dept_Values_Input().nodeOutput().invalidate();
}
catch (WDDynamicRFCExecuteException e)
{
msgManager.reportException(e.toString(), true);
}
Is anything wrong in this code? Even after executing the RFC, the size of node I_Subdept() is zero, but the RFC works fine in the backend.
Regards,
Jaydeep

A typical misunderstanding when populating structured input data via code is the following:
- You have bound a WD context node hierarchy to the model say
N1 > M1
->N2 > ->M2
where N1, N2 are WD Context nodes (N2 is child of N1) and M1, M2 are
model classes bound to the context nodes. Important: M1 has a relation
to M2 on the model side, meaning there is some method M1.setMyM2(M2)
(assuming the target role of the relation is called "MyM2").
- You create context elements for N1 and N2 which are bound to a model
class instances of M1 and M2 respectively.
Assuming that M1 is the "executable" model class (*_Input) and M2 represents a required input structure, the M2 input will, using the above approach, not be available on execution. Why? The relation on the model side (MyM2) is not established if you maintain it only via the context, i.e. context and model are not "in sync". As RFC execution is done via the model, the M2 input will not be available.
It is best to create complex/nested input structures on the model side and then bind the top-level model object to the corresponding context node.
In the above sample this would be:
M1 m1ModelObject = new M1();
M2 m2ModelObject = new M2();
m1ModelObject.setMyM2(m2ModelObject);
Hope it helps!
Regards,
Sangeeta -
Add Table Valued Parameter to a stored procedure
I have a stored procedure that voids an invoice, puts the items back into inventory, and makes available the payment that was applied to this invoice. I would like to be able to do all this for a number of invoices at a time using a table-valued parameter. How would I be able to do it?
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: Debra
-- Create date: March 25, 2014
-- Description: Void an invoice.
-- =============================================
CREATE PROCEDURE AR_VOID
-- Add the parameters for the stored procedure here
@Invoice INT,
@InvType nvarchar(3)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
IF(@InvType = 'reg')
BEGIN
UPDATE INV
SET ONHAND = ONHAND + ARD.QTY,
    STAMPED = CASE WHEN INV.TYPE = 'CIG' THEN STAMPED + ARD.QTY ELSE 0 END,
    LASTDATE = CONVERT(DATE, GETDATE())
FROM ARD
JOIN INV ON ARD.ITEM = INV.ITEM
WHERE ARD.INVOICE = @Invoice
MERGE INTO RECEIPTSH target
USING (SELECT JOURNAL, SUM(AMOUNT) AMOUNT FROM Applied
WHERE INVOICE = @Invoice GROUP BY JOURNAL) AS source
ON target.Journal = source.Journal
WHEN MATCHED THEN
UPDATE
SET Applied = Applied - SOURCE.Amount;
DELETE Applied WHERE INVOICE = @Invoice
END
ELSE
BEGIN
UPDATE INV
SET ONHAND = ONHAND - ARD.QTY,
    STAMPED = CASE WHEN INV.TYPE = 'CIG' THEN STAMPED - ARD.QTY ELSE 0 END,
    LASTDATE = CONVERT(DATE, GETDATE())
FROM ARD
JOIN INV ON ARD.ITEM = INV.ITEM
WHERE ARD.INVOICE = @Invoice
MERGE INTO ARH target
USING (SELECT INVOICE, SUM(AMOUNT) AMOUNT FROM CREDITMEMO WHERE CINVOICE = @Invoice GROUP BY INVOICE) AS source
ON target.INVOICE = source.INVOICE
WHEN MATCHED THEN
UPDATE
SET [OPEN] = 'TRUE', CLOSEDATE = NULL, PAID = target.PAID - source.AMOUNT;
DELETE CREDITMEMO WHERE CINVOICE = @Invoice
END
UPDATE ARH SET SUBTOTAL = 0, TAXES = 0, PAID = 0, STATUS = 'VOD', [OPEN] = 'FALSE', CLOSEDATE = CONVERT(DATE,GETDATE()) WHERE INVOICE = @Invoice
UPDATE ARD SET QTY = 0, ARD.PRICE = 0 WHERE INVOICE = @Invoice
END
GO
Debra has a question
Try
CREATE TYPE InvoicesList AS TABLE (InvoiceID INT)
GO
-- =============================================
-- Author: Debra
-- Create date: March 25, 2014
-- Description: Voids passed invoices.
-- =============================================
ALTER PROCEDURE AR_VOID
@InvoicesList InvoicesList READONLY,
@InvType nvarchar(3)
AS
BEGIN
SET NOCOUNT ON;
IF(@InvType = 'reg')
BEGIN
UPDATE INV
SET ONHAND = ONHAND + ARD.QTY,
    STAMPED = CASE WHEN INV.TYPE = 'CIG' THEN STAMPED + ARD.QTY ELSE 0 END,
    LASTDATE = CONVERT(DATE, GETDATE())
FROM ARD
JOIN INV ON ARD.ITEM = INV.ITEM
WHERE ARD.INVOICE IN (SELECT InvoiceID FROM @InvoicesList)
MERGE INTO RECEIPTSH target
USING (SELECT JOURNAL, SUM(AMOUNT) AMOUNT FROM Applied
WHERE INVOICE IN (SELECT InvoiceID FROM @InvoicesList) GROUP BY JOURNAL) AS source
ON target.Journal = source.Journal
WHEN MATCHED THEN
UPDATE
SET Applied = Applied - SOURCE.Amount;
DELETE Applied WHERE INVOICE IN (Select InvoiceID FROM @InvoicesList)
END
ELSE
BEGIN
UPDATE INV
SET ONHAND = ONHAND - ARD.QTY,
    STAMPED = CASE WHEN INV.TYPE = 'CIG' THEN STAMPED - ARD.QTY ELSE 0 END,
    LASTDATE = CONVERT(DATE, GETDATE())
FROM ARD
JOIN INV ON ARD.ITEM = INV.ITEM
WHERE ARD.INVOICE IN (SELECT InvoiceID FROM @InvoicesList)
MERGE INTO ARH target
USING (SELECT INVOICE, SUM(AMOUNT) AMOUNT FROM CREDITMEMO WHERE CINVOICE IN (SELECT InvoiceID FROM @InvoicesList) GROUP BY INVOICE) AS source
ON target.INVOICE = source.INVOICE
WHEN MATCHED THEN
UPDATE
SET [OPEN] = 'TRUE', CLOSEDATE = NULL, PAID = target.PAID - source.AMOUNT;
DELETE CREDITMEMO WHERE CINVOICE IN (SELECT InvoiceID FROM @InvoicesList)
END
UPDATE ARH SET SUBTOTAL = 0, TAXES = 0, PAID = 0, STATUS = 'VOD', [OPEN] = 'FALSE', CLOSEDATE = CONVERT(DATE,GETDATE()) WHERE INVOICE IN (SELECT InvoiceID FROM @InvoicesList)
UPDATE ARD SET QTY = 0, ARD.PRICE = 0 WHERE INVOICE IN (SELECT InvoiceID FROM @InvoicesList)
END
GO
I didn't look too closely into your code, so I just translated it as-is to use a TVP.
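To call it, fill a variable of the table type and pass it in; a minimal sketch (the invoice numbers are hypothetical):

```sql
DECLARE @Invoices InvoicesList;

INSERT INTO @Invoices (InvoiceID)
VALUES (1001), (1002), (1003);  -- hypothetical invoice IDs

EXEC AR_VOID @InvoicesList = @Invoices, @InvType = 'reg';
```

From .NET you would pass a DataTable (or DbDataReader) as the parameter value with its SqlDbType set to Structured.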
For every expert, there is an equal and opposite expert. - Becker's Law
My blog
My TechNet articles -
Passing internal tables as parameters to subroutines
Hi,
This was going to be a question but I just had it answered by someone. Hopefully, this piece of information is going to be useful to other people as well.
I had a subroutine in my code which looks like this.
form fr_sub_get_data USING uf_file
TABLES ct_int_log STRUCTURE zinterface_log.
endform.
I was told by someone at work to change it as follows: -
form fr_sub_get_data USING uf_file
CHANGING ct_int_log type ty_tab_int_log.
endform.
The reason is that when using the TABLES clause to pass internal tables as parameters, a header line is automatically created in the subroutine, which lasts for as long as the subroutine is being executed. It's considered bad practice to use header lines (work areas are preferable).
Another important point to remember is that the TABLES clause can only be used to pass <b>standard</b> internal tables as parameters. It cannot be used for internal tables of other types.
Cheers!
HI
GOOD
GO THROUGH THIS LINK
http://www.abapforum.com/forum/viewtopic.php?t=1962&language=english
THANKS
MRUTYUN -
How to pass internal table values to parameter
hi,
How can I pass internal table values to a parameter on the selection screen? If it is possible, please send sample code.
thanks.
stalin.
hi,
tables : mara.
data : begin of itab_mara occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
end of itab_mara.
selection-screen : begin of block blk1 with frame title text-001.
parameters : p_matnr like mara-matnr.
selection-screen : end of block blk1.
select matnr ernam from mara into corresponding fields of table itab_mara
where matnr = p_matnr.
loop at itab_mara.
write :/ itab_mara-matnr,
itab_mara-ernam.
endloop.
<b><REMOVED BY MODERATOR></b>
Message was edited by:
Alvaro Tejada Galindo -
SQL Server Multiple JOINS with Table Value Function - query never ends
I have a query with 4 joins using a table value function to get the data and when I execute it the query never ends.
Issue Details
- Table value function
CREATE FUNCTION [dbo].[GetIndicator]
(
@indicator varchar(50),
@refDate datetime
)
RETURNS TABLE
AS
RETURN
SELECT
T1.Id, T1.ColINT_1, T1.ColNVARCHAR_1 COLLATE DATABASE_DEFAULT AS ColNVARCHAR_1, T1.ColNVARCHAR_2, T1.ColSMALLDATETIME_1, T1.ColDECIMAL_1
FROM TABLE2 T2
JOIN TABLE3 T3
ON T2.COLFKT3 = T3.Id
AND T3.ReferenceDate = @RefDate
AND T3.State != 'Deleted'
JOIN TABLE4 T4
ON T2.COLFKT4 = T4.Id AND T4.Name=@indicator
JOIN TABLE1 T1
ON T2.COLFKT1=T1.Id
- Query
DECLARE @RefDate datetime
SET @RefDate = '30 April 2014 23:59:59'
SELECT DISTINCT OTHERTABLE.Id As Id
FROM
GetIndicator('ID#1_0#INDICATOR_X',@RefDate) AS OTHERTABLE
JOIN GetIndicator('ID#1_0#INDICATOR_Y',@RefDate) AS YTABLE
ON OTHERTABLE.SomeId=YTABLE.SomeId
AND OTHERTABLE.DateOfEntry=YTABLE.DateOfEntry
JOIN GetIndicator('ID#1_0#INDICATOR_Z',@RefDate) AS ZTABLE
ON OTHERTABLE.SomeId=ZTABLE.SomeId
AND OTHERTABLE.DateOfEntry=ZTABLE.DateOfEntry
JOIN GetIndicator('ID#1_0#INDICATOR_W',@RefDate) AS WTABLE
ON OTHERTABLE.SomeId=WTABLE.SomeId
AND OTHERTABLE.DateOfEntry=WTABLE.DateOfEntry
JOIN GetIndicator('ID#1_0#INDICATOR_A',@RefDate) AS ATABLE
ON OTHERTABLE.SomeId=ATABLE.SomeId
AND OTHERTABLE.DateOfEntry=ATABLE.DateOfEntry
Other details:
- SQL server version: 2008 R2
- If I execute the table function code outside the query, with the same args, the execution time is less than 1s.
- Each table function call returns between 250 and 500 rows.
Hi,
Calling a function is in general costly, and joining with a function five times is definitely not efficient.
1. You can populate the results for all parameters in a CTE, table variable, or temporary table and join against that (instead of the function) for the different parameters.
2. It looks like you want to fetch the IDs falling under different indicators for the same @RefDate. You can try something like this:
DECLARE @RefDate datetime = '2014-04-30 23:59:59';

WITH CTE
AS
(
SELECT
T1.Id, T1.ColINT_1, T1.ColNVARCHAR_1 COLLATE DATABASE_DEFAULT AS ColNVARCHAR_1, T1.ColNVARCHAR_2, T1.ColSMALLDATETIME_1, T1.ColDECIMAL_1, T4.Name
FROM TABLE2 T2
JOIN TABLE3 T3
ON T2.COLFKT3 = T3.Id
AND T3.ReferenceDate = @RefDate
AND T3.State != 'Deleted'
JOIN TABLE4 T4
ON T2.COLFKT4 = T4.Id
JOIN TABLE1 T1
ON T2.COLFKT1 = T1.Id
)
SELECT * FROM CTE
WHERE Name IN ('ID#1_0#INDICATOR_X', 'ID#1_0#INDICATOR_Y', 'ID#1_0#INDICATOR_Z', 'ID#1_0#INDICATOR_W', 'ID#1_0#INDICATOR_A');
Or you can simplify it further depending on your requirement.
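Option 1 could be sketched with a temporary table like this (hypothetical; it assumes SomeId and DateOfEntry live on TABLE1, as implied by the outer query):

```sql
DECLARE @RefDate datetime = '2014-04-30 23:59:59';

-- Materialize the rows for all five indicators once, then find the keys
-- that occur for every indicator (the semantics of the five-way join).
SELECT T1.SomeId, T1.DateOfEntry, T4.Name
INTO #Indicators
FROM TABLE2 T2
JOIN TABLE3 T3
    ON T2.COLFKT3 = T3.Id
    AND T3.ReferenceDate = @RefDate
    AND T3.State != 'Deleted'
JOIN TABLE4 T4
    ON T2.COLFKT4 = T4.Id
    AND T4.Name IN ('ID#1_0#INDICATOR_X', 'ID#1_0#INDICATOR_Y',
                    'ID#1_0#INDICATOR_Z', 'ID#1_0#INDICATOR_W',
                    'ID#1_0#INDICATOR_A')
JOIN TABLE1 T1
    ON T2.COLFKT1 = T1.Id;

SELECT SomeId, DateOfEntry
FROM #Indicators
GROUP BY SomeId, DateOfEntry
HAVING COUNT(DISTINCT Name) = 5;
```

To get the Id values the original query returns, join #Indicators (filtered to INDICATOR_X) back on SomeId and DateOfEntry.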
Regards,
Brindha.