cl_gui_alv_grid: exporting protected table data MT_OUTTAB to a local internal table
Hi guys,
this is my problem.
I have an instance of cl_gui_alv_grid in the reference variable gos_alv. This instance
is populated by the SAP GOS service.
Now I'd like to save locally the internal data table of gos_alv (populated
automatically by the service call).
The data table is called mt_outtab and is a standard protected attribute of the cl_gui_alv_grid class
(you can see it in transaction SE24).
I thought to create a subclass of cl_gui_alv_grid and add a public method to
read the data, like this:
field-symbols: <outtab> type standard table.
* CLASS lcl_gui_alv_grid DEFINITION
class lcl_gui_alv_grid definition inheriting from cl_gui_alv_grid.
public section.
methods : get_tab_line.
endclass. "lcl_gui_alv_grid DEFINITION
* CLASS lcl_gui_alv_grid IMPLEMENTATION
class lcl_gui_alv_grid implementation.
method get_tab_line.
* mt_outtab is the data table held as a protected attribute
* in class cl_gui_alv_grid.
assign me->mt_outtab->* to <outtab>. "Original data
endmethod. "get_tab_line
endclass. "lcl_gui_alv_grid IMPLEMENTATION
data : l_alv type ref to lcl_gui_alv_grid.
But when I do a down-cast like this:
l_alv ?= gos_alv.
I get a casting-error exception. So how can I extract the protected data into a
local internal table?
I've already tried to debug the service that builds the ALV list to understand how the data table
is populated, but it's too complex.
Thank you
Andrea
Hi,
no suggestions yet?
Generally speaking, I cannot understand why I cannot down-cast with an instance declared as a subclass of a superclass.
Many examples show that I can create a subclass inheriting from a superclass, add methods and attributes, do something like subclass ?= superclass, and then call a method of the subclass.
But when I try to do the same with a class derived from cl_gui_alv_grid, I always get a casting-type exception on the down-cast instruction.
Could you explain why?
Thank you very much
Andrea
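For what it's worth, a down-cast only succeeds when the dynamic (runtime) type of the object is the target class or one of its subclasses. The GOS service creates the grid as a plain cl_gui_alv_grid, so casting that reference to lcl_gui_alv_grid has to fail. A minimal sketch of the rule (variable names are illustrative):

```abap
DATA: lo_super TYPE REF TO cl_gui_alv_grid,
      lo_sub   TYPE REF TO lcl_gui_alv_grid.

* Works: the object really is an lcl_gui_alv_grid instance
CREATE OBJECT lo_sub EXPORTING i_parent = cl_gui_container=>screen0.
lo_super = lo_sub.     " up-cast: always allowed
lo_sub ?= lo_super.    " down-cast: succeeds, runtime type matches

* Fails: the object was created as plain cl_gui_alv_grid
CREATE OBJECT lo_super EXPORTING i_parent = cl_gui_container=>screen0.
lo_sub ?= lo_super.    " raises CX_SY_MOVE_CAST_ERROR
```

In other words, the subclass trick only helps when you create the grid yourself as the subclass; it cannot retro-fit an instance that the GOS service has already created.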
Similar Messages
-
How to display table data without using ALV and table element.
Hi,
Is it possible to display table data without using ALV and a table element?
Every time I fetch data based on the (customer, status) fields and display it in my output using ALV
(each time I fetch a single row of data).
The problem is that the ALV occupies too much space in the output. I want to display only the data part; I don't want the field names,
settings, header data, and so on.
Please suggest a way to display just the data part.
Regards,
Rakhi.
Hi,
Do you mean that you need an ALV without the default function toolbar? If so, the easier solution would have been to use a Table Element instead. But if you need to use ALV only, without the function toolbar, you can do away with it as well.
In that case, after calling GET_MODEL, you need to add a few more lines of code to achieve your goal. Those lines are:
DATA LV_VALUE TYPE REF TO CL_SALV_WD_CONFIG_TABLE.
LV_VALUE = LO_INTERFACECONTROLLER->GET_MODEL( ).
* Standard Filter Function setting to FALSE
LV_VALUE->IF_SALV_WD_STD_FUNCTIONS~SET_SORT_COMPLEX_ALLOWED( ABAP_FALSE ).
LV_VALUE->IF_SALV_WD_STD_FUNCTIONS~SET_FILTER_COMPLEX_ALLOWED( ABAP_FALSE ).
LV_VALUE->IF_SALV_WD_STD_FUNCTIONS~SET_FILTER_FILTERLINE_ALLOWED( ABAP_FALSE ).
LV_VALUE->IF_SALV_WD_STD_FUNCTIONS~SET_DISPLAY_SETTINGS_ALLOWED( ABAP_FALSE ).
LV_VALUE->IF_SALV_WD_STD_FUNCTIONS~SET_VIEW_LIST_ALLOWED( ABAP_FALSE ).
LV_VALUE->IF_SALV_WD_STD_FUNCTIONS~SET_SORT_HEADERCLICK_ALLOWED( ABAP_FALSE ).
LV_VALUE->IF_SALV_WD_STD_FUNCTIONS~SET_HIERARCHY_ALLOWED( ABAP_FALSE ).
* Standard Filter Function setting to FALSE Ends
As you can see, LV_VALUE is a reference to CL_SALV_WD_CONFIG_TABLE. Using LV_VALUE, you set the standard functions to false to suppress their display.
Hope this answers your query.
Thanks.
Kumar Saurav. -
BUG: Export DDL and Data fails for mixed case table/column names
Hi there,
I have found a bug in SQL Developer. See details below.
Description:
When the "Export DDL and Data" function is used on a table/columns not named in UPPERCASE, the SQL generated by SQL Developer is invalid.
Steps to reproduce:
- open SQL Developer, connect to DB
- make a table named "lowerCase" (in double quotes, so it won't be automatically changed to capital letters)
- you may also add some columns, for example "lowerCol1", "UpCol2", ALLUPCOL3
- add some data rows to the table
- choose Tools -> Export DDL and Data
- check exporting of tables and data, on "filter" tabs choose your "lowerCase" table
- press "Apply"
Error:
The generated SQL contains invalid INSERTs: mixed-case table and column names are referenced without the obligatory double quotes, which yields an error when the generated script is executed (see below; the relevant line is the INSERT).
-- DDL for Table lowerCase
CREATE TABLE "DBO_HT"."lowerCase"
( "lowerCol1" VARCHAR2(100),
"UpCol2" VARCHAR2(100),
"ALLUPCOL3" VARCHAR2(100)
);
-- DATA FOR TABLE lowerCase
-- FILTER = none used
-- INSERTING into lowerCase
Insert into lowerCase (lowerCol1,UpCol2,ALLUPCOL3) values ('lc','uc','auc');
-- END DATA FOR TABLE lowerCase
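For comparison, a corrected INSERT would quote every mixed-case identifier (the all-uppercase column needs no quotes, since unquoted names fold to uppercase anyway):

```sql
-- Double quotes preserve the exact case of the identifiers
Insert into "lowerCase" ("lowerCol1","UpCol2",ALLUPCOL3) values ('lc','uc','auc');
```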
Remarks
SQL Developer: version 1.2.1, build MAIN-32.13
Oracle DBs: 9.2 & Express
OS: Windows 2000 Professional
If you need any more details/testing, let me know. I'd really appreciate a quick patch for this issue...
Alternatively, do you know of any other simple way of copying a single database (it's called a schema in Oracle, right?) from one computer to another? Possibly something so simple like detaching->copying->reattaching mdf (data) files in SQL Server... I thought that this "Export DDL&Data" function will do, but as you can see I couldn't use it.
I just need a simple solution that works: one operation on the source to export everything, get the resulting files to the other computer, and one operation to have it running there... I think that such a scenario is very basic, yet I just can't achieve it, and I am simply not allowed to spend more time on it (read: our test project fails, my company rejects my "lobbying" and stays with MSSQL :/ )
Thanks a lot & bye
Thanks for your reply.
ad. 1)
You're right. I just wanted to give some very short feedback on my experiences with SQL Developer, so I didn't think starting new threads would be necessary, but as I was writing it became much bigger than I initially planned - sorry about that. I will make proper threads as soon as possible. Having an "Edit post" button on this forum would also be useful.
ad. 2)
Generally, you're right - in most cases it's true that "switching DBMS is a major commitment" and "you will produce terrible code" if you don't learn the new one.
However, I think that you miss one part of the market here - the market that I think Express is also targeted at. I'd call it a "fire&forget databases" market; MySQL comes to mind as possibly the most common solution here. It's the rather small systems, possibly web-accessed, whose data-throughput requirements are rather modest; the point is to store data at all, and not necessarily in the fastest way, because given the amount of data that is used, even on low-end hardware it will work well enough. What's important here is general ease of use - how easy it is to set up such a system, connect and access data, develop software using it, how much maintenance is needed, how easy this maintenance is, and how easy the most common development tasks are, such as creating a DB, moving a DB from a test to a production server, etc. There, "how easy" directly translates to "how much time we need to set it up", which translates to "how much the development will cost".
Considering current technology, switching the DBMS in such systems is not necessarily a major commitment, and believe me, you will not produce terrible code. In many cases it's as simple as changing a switch in your ORM toolkit: hibernate.dialect = Hibernate.Dialect.OracleDialect vs MySQLDialect vs MsSql2005Dialect
Therefore, in some part of the market it's easy to switch DBMS, even on a project-by-project basis. The reason to switch will appear when another DBMS makes life easier => development faster. From that point of view, I can understand my colleagues giving me an embarrassed look and saying "come on, I won't read all these docs just to have the db copied to the test server". And it doesn't mean "they are not willing to learn anything new"; it's just that they feel such a basic task should have a self-explaining solution that doesn't require mastering any special knowledge. And if they get such simple solutions somewhere else, it costs them nothing to change the hibernate dialect.
I think Oracle did a great job with introducing Express to this "fire&forget" market. The installation is a snap, it just works out of the box, with nothing serious to configure - the opposite of what I remember from installing and working on Oracle 9 a few years ago. In some places it's still "you need to start SQL*Plus and enter this script", but it's definitely less than before. I also find SQL Developer a great tool; it can do most of what we need to do with the DB, and it's much better and more pleasant to use than the Oracle 9 tools. Still, a few basic things require too much hassle, and I'd say taking your schema to another machine is one of them. So I think that, if you do it well, the "schema copy wizard" you mentioned might be very helpful. If I were to give any general advice for the Express line of DB/tools, I'd say "make things simple" - make it "a DB you can't see".
That's, IMHO, the way to attract more Express users. -
Unable to edit table data, but not for all tables
I have multiple tables in a schema. For some tables, I am able to make edits to table data directly, i.e., context menu Table | Open, and the Data tab. When I am able to edit, I get a pencil icon inside the cell I am editing/typing in (and am able to commit the changes). When I am not able to edit, nothing happens (no error message, sound, or visual cue). I thought it had to do with who owns the table object, but I log in as the owner of the affected table objects.
Any pointers would be greatly appreciated so I am equipped when asking the DBA.
Thanks,
OS: Windows XP Professional SP2
Java(TM) Platform: 1.6.0_11
Oracle IDE: 2.1.1.64.45
Versioning Support: 2.1.1.64.45
Edited by: New2OWB10gR2 on Jun 23, 2010 12:20 PM
Hello again,
Here you are the DDL of the offending table:
CREATE TABLE "DBADMEX"."T50SEC82"
( "COD_EMPRESA" CHAR(4 BYTE) DEFAULT ' ' NOT NULL ENABLE,
"COD_EMPR_CONT" CHAR(4 BYTE) DEFAULT ' ' NOT NULL ENABLE,
"COD_SECT_CONT" CHAR(2 BYTE) DEFAULT ' ' NOT NULL ENABLE,
"NUM_CUEN_CONT" CHAR(18 BYTE) DEFAULT ' ' NOT NULL ENABLE,
"COD_PAIS" CHAR(4 BYTE) DEFAULT ' ' NOT NULL ENABLE,
"COD_SECTOR" CHAR(6 BYTE) DEFAULT ' ' NOT NULL ENABLE
)
PCTFREE 10 PCTUSED 40 INITRANS 50 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
( INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT )
TABLESPACE "TS_50" ;
CREATE UNIQUE INDEX "DBADMEX"."I5000082" ON "DBADMEX"."T50SEC82"
( "COD_EMPRESA", "COD_EMPR_CONT", "COD_SECT_CONT", "NUM_CUEN_CONT" )
PCTFREE 10 INITRANS 50 MAXTRANS 255 COMPUTE STATISTICS STORAGE
( INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT )
TABLESPACE "TS_50" ;
We are using the following versions:
Oracle database: 11.1.0.7.0
Oracle Client: 11.2.0.1.0
Windows (where the client runs): XP SP3 (version 5.1 Build 2600_spsp_sp3_gdr.080814-1236) in Spanish.
SQL Developer: 2.1.1.64 (MAIN-64.45)
I think I haven't forgotten anything.
Thanks in advance for your help! -
How to display a table data on Screen having a Table control
Hi ,
I am new to ABAP. I would like to display a table's data (e.g. ZDemo) on a screen at run time. I have defined a table control in the screen. Now I want to populate data from ZDemo into the table control. How can I do that? Please help me move forward in this regard.
Hi Gayatri,
After creating the table control, do the following steps.
1. In the flow logic section write the following code:
PROCESS BEFORE OUTPUT.
MODULE STATUS_0200.
LOOP AT I_LIKP WITH CONTROL LIKP_DATA CURSOR LIKP_DATA-CURRENT_LINE.
MODULE ASSIGN_DATA.
ENDLOOP.
PROCESS AFTER INPUT.
MODULE USER_COMMAND_0200.
LOOP AT I_LIKP.
ENDLOOP.
I_LIKP is the internal table which is used to display table data in the table control.
2. In Process Before Output, in the module STATUS_0200 write the following code:
DESCRIBE TABLE I_LIKP LINES FILL.
LIKP_DATA-LINES = FILL.
In Process After Input, in the module USER_COMMAND_0200 write the following code:
CASE SY-UCOMM.
WHEN 'LIPS'.
READ TABLE I_LIKP WITH KEY MARK = 'X'.
SELECT VBELN
POSNR
WERKS
LGORT
FROM LIPS
INTO TABLE I_LIPS
WHERE VBELN = I_LIKP-VBELN.
IF SY-SUBRC = 0.
CALL SCREEN 200.
ENDIF.
WHEN 'BACK'.
SET SCREEN 200.
ENDCASE.
In Process Before Output, in the module ASSIGN_DATA (which is inside the loop), write the following code:
MOVE-CORRESPONDING I_LIKP TO LIKP.
So, altogether, your code should be like this.
TABLES: LIKP, LIPS.
DATA: BEGIN OF I_LIKP OCCURS 0,
VBELN LIKE LIKP-VBELN,
ERNAM LIKE LIKP-ERNAM,
ERZET LIKE LIKP-ERZET,
ERDAT LIKE LIKP-ERDAT,
MARK TYPE C VALUE 'X',
END OF I_LIKP,
BEGIN OF I_LIPS OCCURS 0,
VBELN LIKE LIPS-VBELN,
POSNR LIKE LIPS-POSNR,
WERKS LIKE LIPS-WERKS,
LGORT LIKE LIPS-LGORT,
END OF I_LIPS,
FILL TYPE I.
CONTROLS: LIKP_DATA TYPE TABLEVIEW USING SCREEN 200,
LIPS_DATA TYPE TABLEVIEW USING SCREEN 300.
DATA: COLS LIKE LINE OF LIKP_DATA-COLS.
*& Module USER_COMMAND_0100 INPUT
MODULE USER_COMMAND_0100 INPUT.
CASE SY-UCOMM.
WHEN 'LIKP'.
SELECT VBELN
ERNAM
ERZET
ERDAT
FROM LIKP
INTO TABLE I_LIKP
WHERE VBELN = LIKP-VBELN.
IF NOT I_LIKP[] IS INITIAL.
CALL SCREEN 200.
ENDIF.
WHEN 'EXIT'.
LEAVE PROGRAM.
ENDCASE.
ENDMODULE. " USER_COMMAND_0100 INPUT
*& Module assign_data OUTPUT
MODULE ASSIGN_DATA OUTPUT.
MOVE-CORRESPONDING I_LIKP TO LIKP.
ENDMODULE. " assign_data OUTPUT
*& Module STATUS_0200 OUTPUT
MODULE STATUS_0200 OUTPUT.
DESCRIBE TABLE I_LIKP LINES FILL.
LIKP_DATA-LINES = FILL.
ENDMODULE. " STATUS_0200 OUTPUT
*& Module USER_COMMAND_0200 INPUT
MODULE USER_COMMAND_0200 INPUT.
CASE SY-UCOMM.
WHEN 'LIPS'.
READ TABLE I_LIKP WITH KEY MARK = 'X'.
SELECT VBELN
POSNR
WERKS
LGORT
FROM LIPS
INTO TABLE I_LIPS
WHERE VBELN = I_LIKP-VBELN.
IF SY-SUBRC = 0.
CALL SCREEN 200.
ENDIF.
WHEN 'BACK'.
SET SCREEN 200.
ENDCASE.
ENDMODULE. " USER_COMMAND_0200 INPUT
Save and Activate the program along with the screen in which you have included table control.
Hope this will help you.
Regards
Haritha. -
Fact table data type changed from int to nvarchar -- rebuild on cube fails
I changed one of my data types in the fact table from INT to NVARCHAR... When I look at the properties it appears
correct, but when I try to process it, processing fails, still treating the data type as INT.
How can the DSV be refreshed (I did try Refresh and it noticed the change to the field) to reflect the new nvarchar data type without
having to rebuild the entire cube?
SSAS 2005
Thanks.
Did you try to view the code of the Data Source View and the cube to check that the correct data type is reflected in both places? If not, you can change it there and save it.
prajwal kumar potula -
Merge multiple source tables' dates into one target table
The requirement is to merge multiple source tables (each with a set of start and end dates) into one target table with one set of start and end dates, containing the date-relevant column values from each source table.

Payment source table:
start date   end date     payment
1/1/2015     12/31/2015   30
1/1/2016     12/31/9999   60

Position source table:
start date   end date     position
1/1/2015     12/31/2016   10
1/1/2017     12/31/9999   20

Target table:
start date   end date     payment   position
1/1/2015     12/31/2015   30        10
1/1/2016     12/31/2016   60        10
1/1/2017     12/31/9999   60        20

What transformation(s) will be best to handle this requirement? Thanks, Lei
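One way to sketch the interval-splitting logic is in plain SQL: collect every start date from both sources, derive each segment's end date as the day before the next start (or 12/31/9999 for the last segment), then join each source on range containment. Below is a runnable sketch using SQLite via Python purely for illustration; the table and column names are assumptions, and the same query shape would apply in Oracle or in an ETL tool's SQL task.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE payment  (start_date TEXT, end_date TEXT, payment  INT);
CREATE TABLE position (start_date TEXT, end_date TEXT, position INT);
INSERT INTO payment  VALUES ('2015-01-01','2015-12-31',30),
                            ('2016-01-01','9999-12-31',60);
INSERT INTO position VALUES ('2015-01-01','2016-12-31',10),
                            ('2017-01-01','9999-12-31',20);
""")

rows = con.execute("""
WITH boundaries(start_date) AS (
  SELECT start_date FROM payment
  UNION
  SELECT start_date FROM position
),
segments AS (
  SELECT start_date,
         -- each segment ends the day before the next one starts
         COALESCE(date(LEAD(start_date) OVER (ORDER BY start_date), '-1 day'),
                  '9999-12-31') AS end_date
  FROM boundaries
)
SELECT s.start_date, s.end_date, p.payment, q.position
FROM segments s
JOIN payment  p ON s.start_date BETWEEN p.start_date AND p.end_date
JOIN position q ON s.start_date BETWEEN q.start_date AND q.end_date
ORDER BY s.start_date
""").fetchall()

for row in rows:
    print(row)
```

The three result rows correspond exactly to the target table shown above.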
Thanks Karen,
that was exactly what i was hoping for.
Maybe it could be made easier/less confusing if the Mapping Workbench just made you choose a target table. But maybe this is not useful if the table contains two foreign keys to the same table. Or maybe this should just be put somewhere in the documentation.
Regards,
Robert
Hi Donald,
fortunately I'm my own DBA, so I don't have any problems ;-). However, I'm certainly interested in the reasons for not having such a conditional foreign key.
Actually, the foreign key isn't conditional; the condition is that either the field (using the FK) must be filled, or a free-format field. The reasoning is that we have a list of known towns, and if the address is local, a town from that list must be chosen. If the address is outside the country, a town can just be typed in (no list).
Concerning the aggregate, all fields are always used. There are no neediness flags anywhere. The aggregate contains three fields which are mapped as direct (two fields) or as a one-to-one (the FK). All 'parents' contain these three fields.
Regards,
Robert -
To export and import oracle 11g table data only
Hi Gurus,
I'm just not sure of the procedure to follow to export just the table data, then truncate the table, make some changes (not table-structure changes), and then import the same table data into the relevant table.
Could someone please help me with the steps involved.
Thanks a lot in advance
If you can use Data Pump, here are your commands:
expdp table_owner/password directory=<your_directory> dumpfile=table_name.dmp tables=table_name content=data_only
impdp table_owner/password directory=<your_directory> dumpfile=table_name.dmp tables=table_name table_exists_action=append
Data Pump requires version 10.1 or later.
Dean -
Transfer temp table data to excel
There are temporary tables in the db (Sales). They contain huge amounts of data, with some columns being Chinese characters.
Can I transfer data from a temp table to Excel through SSIS?
example :
select * from #sales
The output should be moved to Excel.
You can't use a local temp table (whose name starts with #) as an OLE DB source directly at development time, because the table will only exist at run time. SSIS will give you an error similar to "Table/Object not found" at development time.
The workaround is to use an actual physical table during package development. A global temp table (whose name starts with ##) created in a different session, e.g. SSMS, can also be used, but the session that created the global temp table has to be kept open during the development of the SSIS package. As you might be aware, global temp tables are available as long as there is a connection using them.
After development and testing are complete, stop the package execution, change the table name to the local temp table either in the package UI or the package XML, and save the package.
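To make the workaround concrete, a hypothetical design-time setup might look like this (the table and column names are illustrative):

```sql
-- Run in an SSMS session and KEEP that session open while you
-- develop the SSIS package; the global temp table persists until
-- the last connection using it closes.
CREATE TABLE ##sales (
    sale_id   INT,
    item_name NVARCHAR(100)   -- NVARCHAR preserves Chinese characters
);

-- Point the package's OLE DB source at ##sales during development;
-- rename it back to #sales in the package before deployment.
```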
- Aalam | http://aalamrangi.wordpress.com (Blog) -
Table Data Source Search Result gives ClassCastException
I set up a table data source and queried it using the following URL:
http://machine_name:port/ultrasearch/query/search.jsp?usearch.p_mode=Advanced
and specified my table data source. The result URLs
came up with the right primary key id. However when I
click the URL, I get:
java.lang.ClassCastException: com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].sql.OrclCallableStatement
at oracle.ultrasearch.query.DisplayUtil.displayTableUrl(DisplayUtil.java:131)
at display.jspService(_display.java:1568) [SRC:/display.jsp:81]
at com.orionserver[Oracle9iAS (9.0.2.0.0) Containers for J2EE].http.OrionHttpJspPage.service(OrionHttpJspPage.java:56)
at oracle.jsp.runtimev2.JspPageTable.service(JspPageTable.java:302)
at oracle.jsp.runtimev2.JspServlet.internalService(JspServlet.java:407)
at oracle.jsp.runtimev2.JspServlet.service(JspServlet.java:330)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:336)
at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:59)
at oracle.security.jazn.oc4j.JAZNFilter.doFilter(JAZNFilter.java:283)
at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:523)
at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:269)
at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:735)
at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].server.http.AJPRequestHandler.run(AJPRequestHandler.java:151)
at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].util.ThreadPoolThread.run(ThreadPoolThread.java:64)
I have specified NUMBER as the data type for my primary key column and it is of type NUMBER in my table DDL. Is that OK or could this be causing the problem?
Dinesh
Dinesh,
Can you provide the following information for creating the table data source:
- Is the table data source based on a table or a view?
- Is the table/view in the local or remote database?
- If the table is in the local database, is the table in the instance owner schema or another schema?
- Do you login to Ultra Search Admin Tool as the instance owner or other users?
- Does the instance owner schema have privileges to read the contents in the table/view? -
How to get common datas from two int.tables
hi,
please tell me how I can get the common data between two internal
tables and place it in a third internal table.
Please give me the syntax.
regards
subhasis.
Hi Subhasis,
SORT:
SORT itab.
Extras:
1. ... BY f1 f2 ... fn
2. ... ASCENDING
3. ... DESCENDING
4. ... AS TEXT
5. ... STABLE
The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See Field symbols not allowed as sort criterion.
Effect
The entries in the internal table are sorted in ascending order using the key from the table definition (DATA, TYPES).
Addition 1
... BY f1 f2 ... fn
Effect
Uses the sort key defined by the sub-fields f1, f2, ..., fn of the table itab instead of the table key. The fields can be of any type; even number fields and tables are allowed.
You can also specify the sort fields dynamically in the form (name). If name is blank at runtime, the sort field is ignored. If itab is a table with a header line, you can also use a field symbol pointing to the header line of itab as a dynamic sort criterion. A field symbol that is not assigned is ignored. If a field symbol is assigned, but does not point to the header line of the internal table, a runtime error occurs.
If the line type of the internal table contains object reference variables as components, or the entire line type is a reference variable, you can use the attributes of the object to which a reference is pointing in a line as sort criteria (see Attributes of Objects as the Key of an Internal Table).
You can address the entire line of an internal table as the key using the pseudocomponent TABLE_LINE. This is particularly relevant for tables with a non-structured line type when you want to address the whole line as the key of the table (see also Pseudocomponent TABLE_LINE With Internal Tables).
If you use one of the additions 2 to 5 before BY, it applies to all fields of the sort key by default. You can also specify these additions after each individual sort field f1, f2, ..., fn. For each key field, this defines an individual sort rule which overrides the default.
Addition 2
... ASCENDING
Effect
Sorts in ascending order. This is also the default if no sort order is specified directly after SORT. For this reason, it is not necessary to specify ASCENDING explicitly as the default sort order.
With the addition BY, you can also specify ASCENDING directly after a sort field to define ascending order explicitly as the sort sequence for this field.
Addition 3
... DESCENDING
Effect
Sorts in descending order. If the addition comes right after SORT, DESCENDING is taken as the default for all fields of the sort key.
With the addition BY, you can also specify DESCENDING directly after a sort field.
Addition 4
... AS TEXT
Effect
Text fields are sorted appropriate to the locale. This means that the relative order of characters is defined according to the text environment being used.
When an internal mode is opened (in other words, when a roll area is opened), the text environment is automatically set to the logon language specified in the user master record. If necessary, however, you can change the text environment explicitly in your program by using a SET-LOCALE statement.
If the addition comes directly after itab, locale-specific rules are used for all fields of the sort key where the type of these fields is C or W. After the sort, the sequence of entries usually does not match the sequence which results otherwise, without using the addition AS TEXT, i.e. with binary sorting.
With the addition BY, you can also specify AS TEXT directly after a sort field, provided it is of type C or W, or a structured type. Otherwise, a runtime error occurs. In sort fields with a structured type, AS TEXT only affects subcomponents with type C or W.
In case of an invalid character, a SYSLOG message is written, and the respective record is inserted at the end.
Note
Please keep the rules for site-specific sorting in mind.
Example
Sort a name table with different keys:
TYPES: BEGIN OF PERSON_TYPE,
NAME(10) TYPE C,
AGE TYPE I,
COUNTRY(3) TYPE C,
END OF PERSON_TYPE.
DATA: PERSON TYPE STANDARD TABLE OF PERSON_TYPE WITH
NON-UNIQUE DEFAULT KEY INITIAL SIZE 5,
WA_PERSON TYPE PERSON_TYPE.
WA_PERSON-NAME = 'Muller'. WA_PERSON-AGE = 22.
WA_PERSON-COUNTRY = 'USA'.
APPEND WA_PERSON TO PERSON.
WA_PERSON-NAME = 'Moller'. WA_PERSON-AGE = 25.
WA_PERSON-COUNTRY = 'FRG'.
APPEND WA_PERSON TO PERSON.
WA_PERSON-NAME = 'Möller'. WA_PERSON-AGE = 22.
WA_PERSON-COUNTRY = 'USA'.
APPEND WA_PERSON TO PERSON.
WA_PERSON-NAME = 'Miller'. WA_PERSON-AGE = 23.
WA_PERSON-COUNTRY = 'USA'.
APPEND WA_PERSON TO PERSON.
SORT PERSON.
Now, the sequence of the table entries is as follows:
Miller 23 USA
Moller 25 FRG
Muller 22 USA
Möller 22 USA
If, for example, you apply German sort rules where the umlaut comes directly after the letter 'o' in the sort, the data record beginning with 'Möller' would not be in the right place in this sequence. It should come second.
Provided a German-language locale is set (e.g. sorting is according to German grammatical rules, see also SET LOCALE), you can sort the names according to German rules as follows:
SORT PERSON BY NAME AS TEXT.
Now, the sequence of table entries is as follows:
Miller 23 USA
Moller 25 FRG
Möller 22 USA
Muller 22 USA
Further examples:
SORT PERSON DESCENDING BY COUNTRY AGE NAME.
Now, the sequence of table entries is as follows:
Miller 23 USA
Möller 22 USA
Muller 22 USA
Moller 25 FRG
SORT PERSON DESCENDING BY AGE ASCENDING NAME AS TEXT.
Now, the sequence of table entries is as follows:
Muller 22 USA
Möller 22 USA
Miller 23 USA
Moller 25 FRG
Addition 5
... STABLE
Effect
Uses a stable sort, that is, the relative sequence of entries that have the same sort key remains unchanged.
Unlike additions 2 to 4, you cannot use this addition directly after a sort field.
Notes
General:
The number of sort fields is restricted to 250.
The sort process is only stable if you use the STABLE addition. Otherwise, a predefined sequence of fields used to sort a list is not usually retained.
It does not make sense to use the SORT command for a SORTED TABLE. If the table type is statically declared, the system returns a syntax error if you try to SORT the table. If the table type is not statically declared (for example, because the table was passed to a FORM routine as an INDEX TABLE in a parameter), and the system can interpret the SORT statement as an empty operation, it ignores the statement. This is the case when the key in the BY clause corresponds to the beginning of the table key. Otherwise, a runtime error occurs.
To delete all duplicate entries from a sorted internal table (e.g. just after SORT), you can use the DELETE ADJACENT DUPLICATES FROM itab statement.
When using the addition AS TEXT, the sequence of entries after the sort does not usually match the sequence resulting from a binary sort, i.e. if the addition AS TEXT is not specified. The consequence of this is that after the SORT, you are not allowed to access with the READ TABLE itab ... BINARY SEARCH statement.
If you still want to access data sorted appropriate to the locale with a binary search, you can do this by including an additional component in the table where you explicitly store the data formatted using the CONVERT TEXT ... INTO SORTABLE CODE statement. This is also recommended for performance reasons if you have to re-sort the table several times according to locale-specific criteria.
If the internal table has more than 2^19 lines or is larger than 12 MB, the system sorts it physically using an external auxiliary file. You can specify the directory in which the file should be created using the SAP profile parameter DIR_SORTTMP. By default, the system uses the SAP data directory (SAP profile parameter DIR_DATA).
Notes
Performance:
The runtime required to sort an internal table increases with the number of entries and the length of the sort key.
Sorting an internal table with 100 entries with a 50 byte key requires about 1300 msn (standardized microseconds). Using a 30-byte key, the runtime is about 950 msn.
If one of the specified sort criteria is itself an internal table, SORT may sometimes take much longer.
The runtime increases if you use a stable sort.
Physical sorting reduces the runtime required for subsequent sequential processing.
Reward If Useful.
Regards,
Chitra -
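Coming back to the original question (the common rows of two internal tables), a direct answer could look like the following sketch, where the table and work-area names are illustrative:

```abap
DATA: itab1 TYPE STANDARD TABLE OF ty_line,
      itab2 TYPE STANDARD TABLE OF ty_line,
      itab3 TYPE STANDARD TABLE OF ty_line,  " the intersection
      wa    TYPE ty_line.

LOOP AT itab1 INTO wa.
* keep only the lines that also exist in itab2
  READ TABLE itab2 WITH KEY table_line = wa TRANSPORTING NO FIELDS.
  IF sy-subrc = 0.
    APPEND wa TO itab3.
  ENDIF.
ENDLOOP.
```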
Goldengate Extracts reads slow during Table Data Archiving and Index Rebuilding Operations.
We have configured OGG on a near-DR server. The extracts are configured to work in ALO mode.
During the day, the extracts work as expected and are in sync. But during any daily maintenance task, the extracts start lagging and read the same archives very slowly.
This usually happens during table data archiving (DELETE from prod tables, INSERT into history tables) and during index rebuilding on those tables.
Points to be noted:
1) The Tables on which Archiving is done and whose Indexes are rebuilt are not captured by GoldenGate Extract.
2) The extracts are configured to capture DML operations. Only INSERT and UPDATE operations are captured; DELETEs are ignored by the extracts. Also, DDL extraction is not configured.
3) There is no connection to PROD or DR Database
4) The system functions normally all the time, but during table data archiving and index rebuilds it starts lagging.
Q 1. As mentioned above, even though the tables are not part of the capture, the extract lags. What are the possible reasons for the lag?
Q 2. I understand that an index rebuild is a DDL operation, yet it still induces a lag into the system. How?
Q 3. We have been trying to find a way to overcome the lag, which ideally shouldn't have arisen. Is there any extract parameter or some workaround for this situation?
Hi Nick.W,
The amount of redo logs generated is huge. Approximately 200-250 GB in 45-60 minutes.
I agree that the extract has to parse the extra object IDs. During the day, there is a redo switch every 2-3 minutes. The source is a 3-node RAC, so approximately 80-90 archives are generated in an hour.
The reason to mention this was that while reading these archives too, the extract would be parsing extra object IDs, as we are capturing data for only 3 tables. The effect of parsing extra object IDs should have been seen during the day as well: the archive size is the same, the amount of data is the same, and the number of records to be scanned is the same.
The extract slows down and reads at half the speed. If it would normally take 45-50 secs to read an archive log during normal day functioning, it takes approx 90-100 secs to read the archives of the mentioned activities.
Regarding the 3rd point,
a. The extract is a classic extract; the archived logs are on a local file system. No ASM, no SAN/NAS.
b. We have added "TRANLOGOPTIONS BUFSIZE" parameter in our extract. We'll update as soon as we see any kind of improvements. -
MHKIM: Data creation logic for the AR_PAYMENT_SCHEDULES_ALL table
Product: FIN_AR
Written: 2006-11-03
Data creation logic for the AR_PAYMENT_SCHEDULES_ALL table
=========================================
Explanation
At the moment an entered transaction is Completed, a related data row is created in the AR_PAYMENT_SCHEDULES_ALL table, and its due_date is populated from the due_date of the entered transaction.
If the transaction data is subsequently Incompleted, the previously created AR_PAYMENT_SCHEDULES_ALL table data is deleted; when the transaction is Completed again, the due date is recalculated and the row is re-inserted into the table.
Here is the way the due date is calculated in AR for a transaction. First of all, you create a transaction. Depending on the payment term and the transaction date, the due date is calculated. When you COMPLETE the transaction, the payment schedules get created.
This due date is stored in the AR_PAYMENT_SCHEDULES record that is initially created for the transaction.
Now you go and update the due date of this transaction, which means you are updating the due_date column of the payment schedule record of this transaction. That is fine so far.
Next you query this transaction in the Transactions workbench, and when you Incomplete the invoice, the AR_PAYMENT_SCHEDULES record(s) get DELETED. (Hence the record containing the updated value gets DELETED.)
When you then COMPLETE the invoice again, the payment schedule record gets RE-CREATED with the due_date calculated from the payment term and the trx_date. Hence this due date will NOT contain the updated due_date information; rather, it will hold the due date calculated from the transaction date and the terms attached to it. -
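The delete-and-recreate behaviour described above can be observed directly with a query like the following (the table and column names are standard AR objects, but the bind variable for the invoice is of course hypothetical). While the invoice is Incomplete the row is simply absent; after re-completion it reappears with a freshly calculated due_date:

```sql
-- Check the stored payment schedule for a given transaction before
-- incompleting it and again after re-completing it.
SELECT payment_schedule_id,
       due_date,
       trx_date,
       amount_due_original
  FROM ar_payment_schedules_all
 WHERE customer_trx_id = :trx_id;  -- hypothetical bind for the invoice
```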
Error in import table data using oracle datapump
I am trying to import table data using an Oracle Data Pump external table:
CREATE TABLE emp_xt (
  id   NUMBER,
  name VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY backup
  LOCATION ('a.dmp')
);
It returns the following error:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04084: The ORACLE_DATAPUMP access driver does not support the ROWID column.
ORA-06512: at "SYS.ORACLE_DATAPUMP", line 19
Please help me. The .dmp file was generated by the exp command, not by ORACLE_DATAPUMP.
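The errors above are consistent with the last sentence: the ORACLE_DATAPUMP access driver can only read dump files that were written by Data Pump itself (expdp, or an ORACLE_DATAPUMP unload), never classic exp dumps. A minimal sketch of producing a compatible dump file from an existing table (the source table name and directory object are illustrative):

```sql
-- Unload an existing table into a Data Pump dump file; the resulting
-- emp.dmp can then be read by an ORACLE_DATAPUMP external table
-- such as emp_xt above.
CREATE TABLE emp_unload
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY backup
  LOCATION ('emp.dmp')
)
AS SELECT id, name FROM emp;  -- source table "emp" is hypothetical
```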
-
Hi All,
I created a program to generate a dynamic field catalog and internal table. My confusion is how to get the internal table data below into the final dynamic internal table, with the columns based on the document type headings.
This is the static internal table data:
1 0000123 GBP DA S 6265.45
2 0000123 GBP DA H 240.51
3 0000123 GBP DA S 35.82
4 0000123 GBP D1 H 281.85
5 0000123 GBP D1 S 23.41
6 0000123 GBP D1 S 34.23
7 0000123 GBP RV H 97.02
8 0000123 GBP RV S 52.90
9 0000123 GBP RV S 148.31
Can anybody suggest how to populate the amount based on document type and debit/credit indicator?
For Example
CUST DAC DAD D1C D1D RVC RVD
123 6265.45 240.51
35.82
Thanks in Advance
Sekhar
Moderator message: please do not open multiple threads for the same or similar issue.
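One possible approach, sketched below, is to loop over the static table and assign each amount into the dynamic row's column by building the component name from the document type plus a C/D suffix. This is not from the thread: the field names (kunnr, blart, shkzg, wrbtr) and the S-to-C / H-to-D mapping are assumptions inferred from the example rows, and <ft_dyn> is assumed to point at the dynamic table created earlier with components CUST, DAC, DAD, D1C, D1D, RVC, RVD.

```abap
* Rough sketch only: distribute each static row into the dynamic
* column <doc type> + 'C'/'D'. Adjust the S/H mapping to your needs.
TYPES: BEGIN OF ty_static,
         kunnr TYPE kunnr,   " customer (assumed field name)
         blart TYPE blart,   " document type, e.g. DA, D1, RV
         shkzg TYPE shkzg,   " debit/credit indicator S or H
         wrbtr TYPE wrbtr,   " amount
       END OF ty_static.
DATA: lt_static TYPE STANDARD TABLE OF ty_static,
      ls_static TYPE ty_static,
      lv_col    TYPE fieldname.
FIELD-SYMBOLS: <ft_dyn> TYPE STANDARD TABLE,
               <fs_row> TYPE any,
               <fs_val> TYPE any.

LOOP AT lt_static INTO ls_static.
* One dynamic row per customer
  READ TABLE <ft_dyn> ASSIGNING <fs_row>
       WITH KEY ('CUST') = ls_static-kunnr.
  IF sy-subrc <> 0.
    APPEND INITIAL LINE TO <ft_dyn> ASSIGNING <fs_row>.
    ASSIGN COMPONENT 'CUST' OF STRUCTURE <fs_row> TO <fs_val>.
    <fs_val> = ls_static-kunnr.
  ENDIF.
* Build target column name, e.g. 'DAC' for DA + S (per the example)
  IF ls_static-shkzg = 'S'.
    CONCATENATE ls_static-blart 'C' INTO lv_col.
  ELSE.
    CONCATENATE ls_static-blart 'D' INTO lv_col.
  ENDIF.
  ASSIGN COMPONENT lv_col OF STRUCTURE <fs_row> TO <fs_val>.
  IF sy-subrc = 0.
    <fs_val> = <fs_val> + ls_static-wrbtr.
  ENDIF.
ENDLOOP.
```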
Edited by: Thomas Zloch on Dec 9, 2011 4:17 PM