Examples for DB Tables
Hi,
Can anybody give me examples (table names) of:
1. Transparent tables
2. Pooled tables
3. Cluster tables
Are cluster tables used in the FI module?
Thanks
seshu
Not sure what you mean here. Transactions in SAP are programs or sets of programs that allow the user to interface with the database, meaning that the user enters data and the records are created in the database. This is a transaction.
A transaction code is an object which allows the user to access the underlying program. For example, you have a program and screen in a module pool program. The user needs to run this program. The transaction code is tied to the program and screen, so the user can simply call the transaction code from the box at the top of the GUI.
You can also have transaction codes which fire report programs which have selection screens.
Regards,
Rich Heilman
Similar Messages
-
Hi Team,
Could I know some real-time scenarios where we would use alias tables, and how?
Thanks.
Hi,
Read here; https://santoshbidw.wordpress.com/category/obiee-10g/obiee-10g-rpd/alias-table/
You asked the same question already in another forum;
Alias table
Why ask the same question again?
Thanks,
Daan Bakboord
http://obibb.wordpress.com -
Any more examples of Web Dynpro table to Excel export
Hi, experts!
These days I am interested in Web Dynpro for Java,
but I am not good at Java.
Who can give me an easy example of exporting a Web Dynpro table to Excel?
Thanks in advance!
Hi,
Please post this in the Web Dynpro for Java forum. You have a better chance of getting a quick reply there.
regards
Senthivel -
Check average growth (by week for example) for the biggest tables
Hello friends,
I have the list of the biggest tables,
but i need to check average growth (by week for example) for the biggest tables.
Is it possible somehow ?
Thanks in advance.
Hi Jordy,
Call DB02. Then go to Space -> Segments -> Detailed Analysis -> Select the tablename on the list by double click -> At the bottom table click on History -> Click on Weeks tab -> Find the value under Chg.Size column
Best regards,
Orkun Gedik -
Creating view to get first row for each table !!
I have tables (more than 10) which are related using primary key and foreign key relationships.
Example:
Table1:
T1Prim T1Col1 T1Col2
Table2
T2For T2Prim T2Col1 T2Col2 T2Col3
(here T2For will have value same as T1Prim and in my design it has same column name i.e. T1Prim)
Table3
T3For T3Prim T3Col1 T3Col2 T3Col3
(here T3For will have value same as T2Prim)
and so on.
The data is such that Table1 will have one record, Table2 will have one record, and Table3 will have more than one record.
Can I view either the first record from each of them, or all records from each of them, by writing the following view?
I have written a view like this:
Create or replace view test (T1Prim, T1Col1, T1Col2, T2Prim, T2Col1, T2Col2, T2Col3, T3Prim, T3Col1, T3Col2, T3Col3)
As
Select
Table1.T1Prim,
Table1.T1Col1,
Table1.T1Col2,
Table2.T2Prim,
Table2.T2Col1,
Table2.T2Col2,
Table2.T2Col3,
Table3.T3Prim,
Table3.T3Col1,
Table3.T3Col2,
Table3.T3Col3
From
Table1,
Table2,
Table3
where
Table1.Prim = Table2.For
and Table2.Prim = Table3.For
When I run a SELECT on the view I do not get any data, whereas there is data when a SELECT is run on each individual table.
Can someone please tell me where I am goofing?
Thanks in the anticipation that i will get some hint to solve this.
Eagerly waiting for reply.
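For what it's worth, one likely culprit: the view's WHERE clause references Table1.Prim and Table2.For, which do not match the column names described above (T1Prim, T2For, T3For, ...). Here is a hedged sketch using the names as described in the question; adjust to your actual schema before using it:

```sql
CREATE OR REPLACE VIEW test AS
SELECT t1.T1Prim, t1.T1Col1, t1.T1Col2,
       t2.T2Prim, t2.T2Col1, t2.T2Col2, t2.T2Col3,
       t3.T3Prim, t3.T3Col1, t3.T3Col2, t3.T3Col3
  FROM Table1 t1
  JOIN Table2 t2 ON t2.T2For = t1.T1Prim  -- per the post, T2For may actually be named T1Prim
  JOIN Table3 t3 ON t3.T3For = t2.T2Prim;
```

This returns one row per Table3 record; if only the first Table3 row per parent is wanted, an analytic function such as ROW_NUMBER() over the Table3 key could be layered on top.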
Thanks!!
I mean, use a collection:
Collection Methods
A collection method is a built-in function or procedure that operates on collections and is called using dot notation. The methods EXISTS, COUNT, LIMIT, FIRST, LAST, PRIOR, NEXT, EXTEND, TRIM, and DELETE help generalize code, make collections easier to use, and make your applications easier to maintain.
EXISTS, COUNT, LIMIT, FIRST, LAST, PRIOR, and NEXT are functions, which appear as part of an expression. EXTEND, TRIM, and DELETE are procedures, which appear as a statement. EXISTS, PRIOR, NEXT, TRIM, EXTEND, and DELETE take integer parameters. EXISTS, PRIOR, NEXT, and DELETE can also take VARCHAR2 parameters for associative arrays with string keys. EXTEND and TRIM cannot be used with index-by tables.
For more information, see "Using Collection Methods".
Keyword and Parameter Description
collection_name
This identifies an index-by table, nested table, or varray previously declared within the current scope.
COUNT
COUNT returns the number of elements that a collection currently contains, which is useful because the current size of a collection is not always known. You can use COUNT wherever an integer expression is allowed.
For varrays, COUNT always equals LAST. For nested tables, normally, COUNT equals LAST. But, if you delete elements from the middle of a nested table, COUNT is smaller than LAST.
DELETE
This procedure has three forms. DELETE removes all elements from a collection. DELETE(n) removes the nth element from an index-by table or nested table. If n is null, DELETE(n) does nothing. DELETE(m,n) removes all elements in the range m..n from an index-by table or nested table. If m is larger than n or if m or n is null, DELETE(m,n) does nothing.
EXISTS
EXISTS(n) returns TRUE if the nth element in a collection exists. Otherwise, EXISTS(n) returns FALSE. Mainly, you use EXISTS with DELETE to maintain sparse nested tables. You can also use EXISTS to avoid raising an exception when you reference a nonexistent element. When passed an out-of-range subscript, EXISTS returns FALSE instead of raising SUBSCRIPT_OUTSIDE_LIMIT.
EXTEND
This procedure has three forms. EXTEND appends one null element to a collection. EXTEND(n) appends n null elements to a collection. EXTEND(n,i) appends n copies of the ith element to a collection. EXTEND operates on the internal size of a collection. So, if EXTEND encounters deleted elements, it includes them in its tally. You cannot use EXTEND with index-by tables.
FIRST, LAST
FIRST and LAST return the first and last (smallest and largest) subscript values in a collection. The subscript values are usually integers, but can also be strings for associative arrays. If the collection is empty, FIRST and LAST return NULL. If the collection contains only one element, FIRST and LAST return the same subscript value.
For varrays, FIRST always returns 1 and LAST always equals COUNT. For nested tables, normally, LAST equals COUNT. But, if you delete elements from the middle of a nested table, LAST is larger than COUNT.
index
This is an expression that must yield (or convert implicitly to) an integer in most cases, or a string for an associative array declared with string keys.
LIMIT
For nested tables, which have no maximum size, LIMIT returns NULL. For varrays, LIMIT returns the maximum number of elements that a varray can contain (which you must specify in its type definition).
NEXT, PRIOR
PRIOR(n) returns the subscript that precedes index n in a collection. NEXT(n) returns the subscript that succeeds index n. If n has no predecessor, PRIOR(n) returns NULL. Likewise, if n has no successor, NEXT(n) returns NULL.
TRIM
This procedure has two forms. TRIM removes one element from the end of a collection. TRIM(n) removes n elements from the end of a collection. If n is greater than COUNT, TRIM(n) raises SUBSCRIPT_BEYOND_COUNT. You cannot use TRIM with index-by tables.
TRIM operates on the internal size of a collection. So, if TRIM encounters deleted elements, it includes them in its tally.
Usage Notes
You cannot use collection methods in a SQL statement. If you try, you get a compilation error.
Only EXISTS can be applied to atomically null collections. If you apply another method to such collections, PL/SQL raises COLLECTION_IS_NULL.
You can use PRIOR or NEXT to traverse collections indexed by any series of subscripts. For example, you can use PRIOR or NEXT to traverse a nested table from which some elements have been deleted.
EXTEND operates on the internal size of a collection, which includes deleted elements. You cannot use EXTEND to initialize an atomically null collection. Also, if you impose the NOT NULL constraint on a TABLE or VARRAY type, you cannot apply the first two forms of EXTEND to collections of that type.
If an element to be deleted does not exist, DELETE simply skips it; no exception is raised. Varrays are dense, so you cannot delete their individual elements.
PL/SQL keeps placeholders for deleted elements. So, you can replace a deleted element simply by assigning it a new value. However, PL/SQL does not keep placeholders for trimmed elements.
The amount of memory allocated to a nested table can increase or decrease dynamically. As you delete elements, memory is freed page by page. If you delete the entire table, all the memory is freed.
In general, do not depend on the interaction between TRIM and DELETE. It is better to treat nested tables like fixed-size arrays and use only DELETE, or to treat them like stacks and use only TRIM and EXTEND.
Within a subprogram, a collection parameter assumes the properties of the argument bound to it. So, you can apply methods FIRST, LAST, COUNT, and so on to such parameters. For varray parameters, the value of LIMIT is always derived from the parameter type definition, regardless of the parameter mode.
Examples
In the following example, you use NEXT to traverse a nested table from which some elements have been deleted:
i := courses.FIRST;  -- get subscript of first element
WHILE i IS NOT NULL LOOP
   -- do something with courses(i)
   i := courses.NEXT(i);  -- get subscript of next element
END LOOP;
In the following example, PL/SQL executes the assignment statement only if element i exists:
IF courses.EXISTS(i) THEN
courses(i) := new_course;
END IF;
The next example shows that you can use FIRST and LAST to specify the lower and upper bounds of a loop range provided each element in that range exists:
FOR i IN courses.FIRST..courses.LAST LOOP ...
In the following example, you delete elements 2 through 5 from a nested table:
courses.DELETE(2, 5);
In the final example, you use LIMIT to determine if you can add 20 more elements to varray projects:
IF (projects.COUNT + 20) < projects.LIMIT THEN
-- add 20 more elements
Related Topics
Collections, Functions, Procedures
http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/13_elems7.htm#33054
Joel Pérez -
One trigger for Multiple tables
Hi all,
I want to write a trigger for multiple tables.
For example:
In a database schema, a user updates one of the tables. I want to capture which table was changed, along with the old and new values.
The same applies to inserts and deletes.
Regards
Fame
Hi, Fame,
Sorry, a trigger only works on one table, so you need a separate trigger on each separate table.
All of those triggers can call a common procedure.
If you'd like to give a more detailed description of what you want to do, using two or three tables, then someone can give you more detailed instructions on how to do it.
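The pattern described above might look like this minimal sketch; the table, column, and log-table names here are made up for illustration:

```sql
-- Hypothetical audit log table
CREATE TABLE audit_log (
  table_name  VARCHAR2(30),
  column_name VARCHAR2(30),
  old_value   VARCHAR2(4000),
  new_value   VARCHAR2(4000),
  changed_by  VARCHAR2(30),
  changed_at  DATE
);

-- Common procedure called by every per-table trigger
CREATE OR REPLACE PROCEDURE log_change (
  p_table  IN VARCHAR2,
  p_column IN VARCHAR2,
  p_old    IN VARCHAR2,
  p_new    IN VARCHAR2
) AS
BEGIN
  INSERT INTO audit_log
  VALUES (p_table, p_column, p_old, p_new, USER, SYSDATE);
END;
/

-- One trigger per table; each passes its own :OLD/:NEW values
CREATE OR REPLACE TRIGGER trg_emp_audit
AFTER INSERT OR UPDATE OR DELETE ON emp
FOR EACH ROW
BEGIN
  log_change('EMP', 'SAL', TO_CHAR(:OLD.sal), TO_CHAR(:NEW.sal));
END;
/
```

Each table gets its own small trigger, but all the shared logic lives in the one procedure, which keeps the per-table code to a few lines.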
Always say which version of Oracle you're using. -
How to get row count(*) for each table that matches a pattern
I have the following query that returns all tables that match a pattern (tablename_ and then 4 digits). I also want to return the row counts for these tables.
Currently a single column is returned: tablename. I want to add the column RowCount.
DECLARE @SQLCommand nvarchar(4000)
DECLARE @TableName varchar(128)
SET @TableName = 'ods_TTstat_master' --<<<<<< change this to a table name
SET @SQLCommand = 'SELECT [name] as zhistTables FROM dbo.sysobjects WHERE name like ''%' + @TableName + '%'' and objectproperty(id,N''IsUserTable'')=1 ORDER BY name DESC'
EXEC sp_executesql @SQLCommand
The LIKE operator requires a string operand.
http://msdn.microsoft.com/en-us/library/ms179859.aspx
Example:
DECLARE @Like varchar(50) = '%frame%';
SELECT * FROM Production.Product WHERE Name like @Like;
-- (79 row(s) affected)
For variable use, apply dynamic SQL:
http://www.sqlusa.com/bestpractices/datetimeconversion/
Rows count all tables:
http://www.sqlusa.com/bestpractices2005/alltablesrowcount/
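Putting the pieces together, here is a sketch that adds the requested RowCount column; it uses the sys.tables and sys.partitions catalog views (rather than dbo.sysobjects) so no per-table scan or dynamic SQL is needed. Note that sys.partitions row counts can be approximate under heavy concurrent DML:

```sql
DECLARE @TableName varchar(128) = 'ods_TTstat_master';

-- Name plus row count for every user table matching the pattern
SELECT t.name      AS zhistTables,
       SUM(p.rows) AS [RowCount]
FROM sys.tables t
JOIN sys.partitions p
  ON p.object_id = t.object_id
 AND p.index_id IN (0, 1)          -- heap or clustered index only
WHERE t.name LIKE '%' + @TableName + '%'
GROUP BY t.name
ORDER BY t.name DESC;
```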
Kalman Toth Database & OLAP Architect
SQL Server 2014 Design & Programming
New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012 -
Hi,
I need some advice on remote keys for lookup tables.
We have loaded lookup data from several client systems into the MDM repository. Each of the client systems can have differences in the lookup values. What we need to do is enable the key mappings so that the Syndicator knows which value belongs to which system.
The tricky part is: we haven't managed to send out the values based on the remote keys. We do not want to send the lookup tables themselves, but the actual main table records. All lookup data should be checked at the point of syndication, and only the used lookup values that originally came from one system should be sent to that particular system. Otherwise the tag should be blank.
Is this the right approach to handle this demand or is there a different way to take care of this? What would be the right settings in the syndicator?
Help will be rewarded.
Thank you very much
best regards
Nicolas
Hi Andreas,
that is correct. Let's take two examples:
1) regions
2) Sales Area data (qualified lookup data)
Both tables are filled and loaded directly from the R/3s. So you would already know which value belongs to which system.
The problem I have is that we will not map the remote key from the main table, because it will be blank for newly created master data (centralization scenario). Therefore we cannot map the remote key from the attached lookup tables, can we?
The remote key will only work for lookup tables if the remote key of the actual master data is mapped. Since we don't have the remote key (the local customer ID from R/3) in MDM, and since we do not create it at the point of syndication, how would the SAP standard scenario look for that?
This is nothing extraordinary; it's just a standard centralization scenario.
Please advice.
Thanks a lot
best regards
Nicolas -
Scheduled Job to gather stats for multiple tables - Oracle 11.2.0.1.0
Hi,
My Oracle DB Version is:
BANNER Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
In our application, users upload files, resulting in inserts of records into a table. A file could contain anywhere from 10,000 to 1 million records.
I have written a procedure to bulk insert these records into this table using the LIMIT clause. After the insert, I noticed my queries run slowly against these tables if huge files are uploaded simultaneously. After gathering stats, the cost reduces and the queries execute faster.
We have 2 such tables which grow based on user file uploads. I would like to schedule a job to gather stats for these two tables during a non-peak hour, apart from the nightly automated Oracle job.
Is there a better way to do this?
I plan to execute the below procedure as a scheduled job using DBMS_SCHEDULER.
--Procedure
create or replace
PROCEDURE p_manual_gather_table_stats AS
  TYPE ttab IS TABLE OF VARCHAR2(30) INDEX BY PLS_INTEGER;
  ltab ttab;
BEGIN
  ltab(1) := 'TAB1';
  ltab(2) := 'TAB2';
  FOR i IN ltab.first .. ltab.last
  LOOP
    dbms_stats.gather_table_stats(
      ownname          => USER,
      tabname          => ltab(i),
      estimate_percent => dbms_stats.auto_sample_size,
      method_opt       => 'for all indexed columns size auto',
      degree           => dbms_stats.auto_degree,
      CASCADE          => TRUE );
  END LOOP;
END p_manual_gather_table_stats;
--Scheduled Job
BEGIN
-- Job defined entirely by the CREATE JOB procedure.
DBMS_SCHEDULER.create_job ( job_name => 'MANUAL_GATHER_TABLE_STATS',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN p_manual_gather_table_stats; END;',
start_date => SYSTIMESTAMP,
repeat_interval => 'FREQ=DAILY; BYHOUR=12;BYMINUTE=45;BYSECOND=0',
end_date => NULL,
enabled => TRUE,
comments => 'Job to manually gather stats for tables: TAB1,TAB2. Runs at 12:45 Daily.');
END;
Thanks,
Somiya
The question was, is there a better way, and you partly answered it.
Somiya, you have to be sure the queries have appropriate statistics when the queries are being run. In addition, if the queries are being run while data is being loaded, that is going to slow things down regardless, for several possible reasons, such as resource contention, inappropriate statistics, and having to maintain a read consistent view for each query.
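One concrete option for the two volatile tables (a sketch against the 11.2 DBMS_STATS API; run as the owning schema) is to lock their statistics so routine jobs skip them, and have the custom procedure gather with force:

```sql
BEGIN
  -- Exclude TAB1/TAB2 from the automatic stats job
  DBMS_STATS.LOCK_TABLE_STATS(ownname => USER, tabname => 'TAB1');
  DBMS_STATS.LOCK_TABLE_STATS(ownname => USER, tabname => 'TAB2');
END;
/

BEGIN
  -- The scheduled procedure must then override the lock
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,
    tabname          => 'TAB1',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE,
    force            => TRUE);  -- ignore the statistics lock
END;
/
```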
The default collection job decides for each table based on changes it perceives in the data. You probably don't want the default collection job to deal with those tables. You probably do want to do what Dan suggested with the statistics. But it's hard to tell from your description. Is the data volume and distribution volatile? You surely want representative statistics available when each query is started. You may want to use all the plan stability features available to tell the optimizer to do the right thing (see for example http://jonathanlewis.wordpress.com/2011/01/12/fake-baselines/ ). You may want to just give up and use dynamic sampling, I don't know, entire books, blogs and papers have been written on the subject. It's sufficiently advanced technology to appear as magic. -
User Datastore for multiple tables and columns!
Hi,
I hope so much someone can help me.
I've made a user datastore to index multiple columns of multiple tables.
Now, the Oracle documentation explains indexing a single table.
I have multiple tables which all have the columns descr and tagnr. I want to run a query something like this:
select table1.column, table2.column ... where contains(indexed_field, 'gas within descr', 1) > 0
Is it possible to index 4 separate tables without having a collective key? I don't want to make a concatenated datastore.
I have written this code.
It compiles fine, but I don't get any results from my queries.
create or replace
procedure My_Proc_Wide
  -- Must be in ctxsys schema.
  -- In a full-scale example, this would be a wrapper
  -- for a proc in the user schema.
  ( rid  in rowid,
    tlob in out NOCOPY clob /* NOCOPY instructs Oracle to pass
                               this argument as fast as possible */
  )
is
v_descr varchar2(80);
v_tagnr varchar2(30);
v_descr_name constant varchar2(20) := 'descr';
v_descr_start_tag constant varchar2(20) := '<' || v_descr_name || '>';
v_descr_end_tag constant varchar2(20) := '</' || v_descr_name || '>';
v_tagnr_name constant varchar2(20) := 'tagnr';
v_tagnr_start_tag constant varchar2(20) := '<' || v_tagnr_name || '>';
v_tagnr_end_tag constant varchar2(20) := '</' || v_tagnr_name || '>';
v_buffer varchar2(4000);
v_length integer;
begin
/* verify the env which called this */
if Dbms_Lob.Istemporary ( tlob ) <> 1
then
raise_application_error ( -20000,
'"IN OUT" tlob isn''t temporary' );
end if;
/* the real logic */
/* first tabel to be indexed */
select t1.tagnr, t1.descr
into v_tagnr, v_descr
from tweb.pdp_positions t1
where t1.rowid = rid;
v_buffer := v_tagnr_start_tag ||
v_tagnr ||
v_tagnr_end_tag ||
v_descr_start_tag ||
v_descr ||
v_descr_end_tag;
v_length := length ( v_buffer );
Dbms_Lob.WriteAppend(tlob, length(v_buffer) + 1, v_buffer || ' ');
/* second table to be indexed */
select t2.tagnr, t2.descr
into v_tagnr, v_descr
from tweb.pdp_schema_equ t2
where t2.rowid = rid;
v_buffer := v_tagnr_start_tag ||
v_tagnr ||
v_tagnr_end_tag ||
v_descr_start_tag ||
v_descr ||
v_descr_end_tag;
v_length := length ( v_buffer );
Dbms_Lob.WriteAppend(tlob, length(v_buffer) + 1, v_buffer || ' ');
/*third table to be indexed */
select t3.tagnr, t3.descr
into v_tagnr, v_descr
from tweb.pdp_equipment t3
where t3.rowid = rid;
v_buffer := v_tagnr_start_tag ||
v_tagnr ||
v_tagnr_end_tag ||
v_descr_start_tag ||
v_descr ||
v_descr_end_tag;
v_length := length ( v_buffer );
Dbms_Lob.WriteAppend(tlob, length(v_buffer) + 1, v_buffer || ' ');
/* fourth table to be indexed */
select t4.tagnr, t4.descr
into v_tagnr, v_descr
from tweb.pdp_Projcode t4
where t4.rowid = rid;
v_buffer := v_tagnr_start_tag ||
v_tagnr ||
v_tagnr_end_tag ||
v_descr_start_tag ||
v_descr ||
v_descr_end_tag;
v_length := length ( v_buffer );
Dbms_Lob.WriteAppend(tlob, length(v_buffer) + 1, v_buffer || ' ');
end My_Proc_Wide;
what have I to do, to make this work?
Any Help would be appriciated!!
Kind Regards,
Arsineh
Arsineh,
I realise that it has been quite some time since you posted this question, but I thought I'd reply just in case you never did manage to get your user datastore working.
The reason your procedure will not work is simple. A user datastore procedure accepts a rowid input parameter. The rowid is the ID of the row that Oracle Text is currently trying to index. In the example you have given, you are attempting to use the supplied rowid as the primary key for multiple tables; this will simply never work, as rowids across multiple tables will never correspond.
The best way to achieve your goal is to create the index on a master table which contains the common primary keys for each of your four tables e.g.
MASTER_TABLE
COL:COMMON_KEY (NUMBER(n))
COL:USER_INDEX_COLUMN (VARCHAR2(1))
If you create the user datastore index on the MASTER_TABLE.USER_INDEX_COLUMN column, your stored proc simply needs to read the correct row from the MASTER_TABLE (SELECT t.common_key INTO v_CommonKey FROM master_table t WHERE t.rowid = rid) and issue subsequent queries to extract the relevant data from the t1..t4 tables using the common key, e.g.
SELECT t1.tagnr, t1.descr into v_tagnr, v_descr FROM t1 WHERE t1.[PRIMARY_KEY_FIELD] = v_CommonKey;
SELECT t2.tagnr, t2.descr into v_tagnr, v_descr FROM t2 WHERE t2.[PRIMARY_KEY_FIELD] = v_CommonKey;
and so on...
Hope this helps
Dean -
Missing most detailed table for dimension tables
Hi ,
I am getting this following error
Business Model Core:
[nQSError: 15003] Missing most detailed table for dimension tables: [Dim - Customer,Dim - Account Hierarchy,Dim - Account Region Hierarchy,Fact - Fins - Period Days Count].
[nQSError: 15001] Could not load navigation space for subject area Core.
I got this error when I tried to configure # of Elapsed Days and # of Cumulative Elapsed Days in the following way:
1. Using the Administration Tool, open OracleBIAnalyticsApps.rpd. The file is located at: ORACLE_INSTANCE\bifoundation\OracleBIServerComponent\coreapplication_obisn\repository
2. In the Business Model and Mapping layer, go to the logical table Fact - Fins - Period Days Count.
3. Under Sources, select the Fact_W_DAY_D_PSFT logical table source.
4. Clear the Disabled option in the General tab and click OK.
5. Open the other two logical table sources, Fact_W_DAY_D_ORA and Fact_W_DAY_D_PSFT, and select the Disabled option.
6. Add the "Fact - Fins - Period Days Count" and "Dim - Company" logical tables to the Business Model Diagram. To do so, right-click the objects and select Business Model Diagram, Selected Tables Only.
7. In the Business Model Diagram, create a new logical join from "Dim - Company" to "Fact - Fins - Period Days Count." The direction of the foreign key should be from the "Dim - Company" logical table to the "Fact - Fins - Period Days Count" table. For example, on a (0,1):N cardinality join, "Dim - Company" will be on the (0/1) side and "Fact - Fins - Period Days Count" will be on the N side.
8. Under the Fact - Fins - Period Days Count logical table, open the "# of Elapsed Days" and "# of Cumulative Elapsed Days" metrics, one at a time.
9. Go to the Levels tab. For the Company dimension, the Logical Level is set to All. Click the X button to remove it. Repeat until the Company dimension does not have a Logical Level setting.
10. Make sure to check Global Consistency to ensure there are no errors, and then save the RPD file.
Please help me to resolve.
Thanks,
Soumitro
Could you let me know how you resolved this? I am facing the same issue.
-
CC&B 2.3.1 - Custom indexes for base tables
Hi,
We are seeing a couple of statements in the database that could improve its performance with new custom indexes on base tables. Questions are:
- can we create new indexes on base tables ?
- is there any recommendations about naming, characteristics and location for this indexes ?
- is there any additional step to do in CC&B in order to use the index (define metadata or ...) ?
Thanks.
Regards.
Hi,
If necessary, you can create a custom index.
In this situation you should follow the naming convention from the Database Design Standards:
Indexes
Index names are composed of the following parts:
[X][C/M/T]NNN[P/S]
• X – the letter X is used as the leading character of all base index names prior to Version 2.0.0. Now the first character of the product owner flag value should be used instead of the letter X. For a client-specific implementation index in Oracle, use CM.
• C/M/T – the second character can be C, M, or T. C is used for control tables (admin tables), M is for master tables, and T is reserved for transaction tables.
• NNN – a three-digit number that uniquely identifies the table on which the index is defined.
• P/S/C – P indicates that this index is the primary key index; S is used for indexes other than primary keys; use C to indicate a client-specific implementation index in a DB2 implementation.
Some examples are:
• XC001P0
• XT206S1
• XT206C2
• CM206S2
Warning! Do not use index names in the application, as the names can change due to unforeseeable reasons.
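As an illustrative sketch only (the table, columns, and tablespace below are hypothetical, not taken from the actual CC&B schema), a client-specific secondary index following the CM convention might look like:

```sql
-- CM  = client-specific index (Oracle implementation)
-- 206 = the three-digit number of the table being indexed
-- S2  = secondary (non-primary-key) index number 2
CREATE INDEX CM206S2
  ON cm_some_table (some_col, other_col)
  TABLESPACE cm_indexes;
```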
There is no additional metadata information for indexes in the CI_MD* tables, because a change of indexes does not influence the generated Java code.
Hope that helps.
Regards,
Bartlomiej -
BAPI step by step to connect JAVA for catsdb table
I need step-by-step instructions for connecting Java to the CATSDB table via BAPI.
Points will be rewarded;
full points for an example using the CATSDB table in a BAPI via JCo (Java).
Thank you,
Regards,
Jagrut Bharatkumar Shukla
Hi,
Check the thread..
https://forums.sdn.sap.com/click.jspa?searchID=3587428&messageID=3647918
Regards,
Omkar. -
Hi Experts,
I need to implement F4 help for an ALV table field.
In my scenario, I am using two views. If we click on any record in the first view, it displays a popup window (the second view) with the relevant record details.
One of the columns holds the values (old values) corresponding to field names. For correcting old values I have created an editable "new value" column; we can enter a new value for an old value and then save it. Up to this point the functionality is OK.
Then I included OVS help for the new value field. Here I need F4 help for the new value field that is relevant to the field name.
For example: the user presses F4 in a "new value" cell; if the corresponding field name is 'WERKS', then it shows the plant values.
Here I can get the field name, domain name, and value table using the method set_attribute().
I have implemented the same concept in ALV using F4IF_FIELD_VALUE_REQUEST, and it works fine.
Here I am a little bit confused. Please advise me how to implement this in OVS.
Regards,
BBC
Hi,
You'll have to create a method for the OVS search help (define it in the "OVS component usage" field in the context).
Sample code (should work for WERKS):
method on_ovs.
* declare data structures for the fields to be displayed and
* for the table columns of the selection list, if necessary
  types:
    begin of lty_stru_input,
*     add fields for the display of your search input here
      werks type werks,
    end of lty_stru_input.
  types:
    begin of lty_stru_list,
*     add fields for the selection list here
      werks type werks_d,
      name1 type name1,
    end of lty_stru_list.
  data: ls_search_input type lty_stru_input,
        lt_select_list  type standard table of lty_stru_list,
        ls_text         type wdr_name_value,
        lt_label_texts  type wdr_name_value_list,
        lt_column_texts type wdr_name_value_list,
        lv_window_title type string,
        lv_group_header type string,
        lv_table_header type string,
        lv_werks        type werks_d.
  field-symbols: <ls_query_params> type lty_stru_input,
                 <ls_selection>    type lty_stru_list.

  case ovs_callback_object->phase_indicator.

    when if_wd_ovs=>co_phase_0.  "configuration phase, may be omitted
*     in this phase you have the possibility to define the texts,
*     if you do not want to use the defaults (DDIC-texts)
*     set label from Medium Description to something more logical
      ls_text-name  = `NAME1`.   "must match a field in list structure
      ls_text-value = `Plant description`.
      insert ls_text into table lt_label_texts.
*     set column header from Medium Description to something more logical
      ls_text-name  = `NAME1`.   "must match a field in list structure
      ls_text-value = `Plant description`.
      insert ls_text into table lt_column_texts.
      lv_window_title = wd_assist->get_text( `003` ).
      lv_group_header = wd_assist->get_text( `004` ).
      lv_table_header = wd_assist->get_text( `005` ).
      ovs_callback_object->set_configuration(
        label_texts  = lt_label_texts
        column_texts = lt_column_texts
        group_header = lv_group_header
        window_title = lv_window_title
        table_header = lv_table_header
        col_count    = 2
        row_count    = 20 ).

    when if_wd_ovs=>co_phase_1.  "set search structure and defaults
*     in this phase you can set the structure and default values
*     of the search structure. If this phase is omitted, the search
*     fields will not be displayed, but the selection table is
*     displayed directly.
*     read values of the original context (not necessary, but you
*     may set these as the defaults). A reference to the context
*     element is available in the callback object.
      ovs_callback_object->context_element->get_static_attributes(
        importing static_attributes = ls_search_input ).
*     pass the values to the OVS component
      ovs_callback_object->set_input_structure(
        input = ls_search_input ).

    when if_wd_ovs=>co_phase_2.
*     if phase 1 is implemented, use the field input for the
*     selection of the table.
*     if phase 1 is omitted, use values from your own context.
      if ovs_callback_object->query_parameters is not bound.
*       TODO exception handling
      endif.
      assign ovs_callback_object->query_parameters->*
        to <ls_query_params>.
      if not <ls_query_params> is assigned.
*       TODO exception handling
      endif.
      call method ovs_callback_object->context_element->get_attribute
        exporting
          name  = 'WERKS'
        importing
          value = lv_werks.
      data: lv_subcat_text type rstxtmd.
      select werks
             name1
        into table lt_select_list
        from t001w.
      ovs_callback_object->set_output_table( output = lt_select_list ).

    when if_wd_ovs=>co_phase_3.
*     apply result
      if ovs_callback_object->selection is not bound.
*       TODO exception handling
      endif.
      assign ovs_callback_object->selection->* to <ls_selection>.
      if <ls_selection> is assigned.
        ovs_callback_object->context_element->set_attribute(
          name  = `WERKS`
          value = <ls_selection>-werks ).
      endif.

  endcase.
endmethod. -
How to validate data entered in table maintenance for Z table?
Hi,
I created a Z-table with table maintenance. I'd like to perform some validation on the entered data.
I know there are events for these : "If this pre-defined time is reached in extended table maintenance, the FORM routine specified for the current view and for this time is processed. This is useful, for example, for performing consistency checks before saving or specific actions when creating new entries."
I also found some info in the Online help:
http://help.sap.com/saphelp_47x200/helpdata/en/91/ca9f0ea9d111d1a5690000e82deaaa/frameset.htm
However it's not clear which event I can use for validation.
I tried event 01, however when I added a message, in the SM30 in case of message, I got the SM30 initial screen.
Do you have any example about validation?
Thanks in advance,
Peter
Hi,
Once you are on the table maintenance generator screen:
Goto --> Environment --> Modification --> Events.
Here specify event '01' and the subroutine name that will hold the validation logic.
As you know, we need to specify a function group.
Go to SE80 and open your function group.
Now, in the PBO of the screen, write a subroutine for the validation before saving an entry in the table.
Refer to the code below for the validation.
*& Form F9000_CHECK_BEFORE_SAVE
*& Subroutine called dynamically to check values before saving
FORM f9000_check_before_save.
TYPES : BEGIN OF ty_flmt,
zz_flmt_type TYPE zz_flmt_type,
zz_gsm_flmt_code TYPE zz_flmt_code,
END OF ty_flmt.
* Internal table
DATA : lit_flmt_code TYPE TABLE OF ty_flmt,
wa_flmt_code LIKE LINE OF lit_flmt_code.
DATA: lv_subrc TYPE sy-subrc VALUE '0',
lv_tabix TYPE sy-tabix,
lv_total_rec TYPE i,
lv_rec TYPE i,
flg_upd TYPE flag.
DESCRIBE TABLE total LINES lv_total_rec.
LOOP AT total.
lv_tabix = sy-tabix.
READ TABLE extract WITH KEY total.
IF sy-subrc EQ 0.
IF extract+3(10) IS INITIAL.
DELETE total.
DELETE extract INDEX sy-tabix.
DELETE extract INDEX lv_tabix.
lv_subrc = '4'.
flg_upd = 'X'.
MESSAGE s119(zcrm_appl) DISPLAY LIKE 'S'.
SET SCREEN 0.
ENDIF.
ENDIF.
wa_flmt_code-zz_flmt_type = total+13(3).
wa_flmt_code-zz_gsm_flmt_code = total+16(10).
APPEND wa_flmt_code TO lit_flmt_code.
ENDLOOP.
IF flg_upd IS INITIAL.
SORT lit_flmt_code BY zz_flmt_type zz_gsm_flmt_code.
DELETE ADJACENT DUPLICATES FROM lit_flmt_code.
DESCRIBE TABLE lit_flmt_code LINES lv_rec.
IF lv_total_rec <> lv_rec.
LOOP AT extract.
READ TABLE total WITH KEY extract.
IF sy-subrc EQ 0.
DELETE total INDEX sy-tabix.
DELETE extract INDEX 1.
lv_subrc = '4'.
MESSAGE s289(zcrm_appl) DISPLAY LIKE 'S'.
SET SCREEN 0.
ENDIF.
ENDLOOP.
ENDIF.
ENDIF.
sy-subrc = lv_subrc.
ENDFORM.
Please reward points and close the thread.
Regards,
Amit Mishra