Instead of COLLECT
Does anyone have an idea about the code below? I want to collect the netwr and fkimg fields per distinct ytab-vbeln. The PERFORM routines convert the currency and put the results into the netwr_y and fkimg_y fields.
LOOP AT ytab.
  MOVE-CORRESPONDING ytab TO son_tab.
  CLEAR vbrp.
  SELECT * FROM vbrp
      WHERE matnr = ytab-matnr.
    son_tab-vbeln = vbrp-vbeln.
    son_tab-posnr = vbrp-posnr.
    son_tab-fbuda = vbrp-fbuda.
    son_tab-fkimg = vbrp-fkimg.
    son_tab-netwr = vbrp-netwr.
    CLEAR vbrk.
    SELECT SINGLE * FROM vbrk
      WHERE vbeln = son_tab-vbeln.
    son_tab-waerk = vbrk-waerk.
    son_tab-fkart = vbrk-fkart.
    IF vbrk-fkart = 'G2' OR vbrk-fkart = 'RE'.
      son_tab-fkimg = son_tab-fkimg * -1.
      son_tab-netwr = son_tab-netwr * -1.
    ENDIF.
    IF son_tab-waerk EQ lv_local_currency.
      PERFORM cnt_fbuda.
    ENDIF.
    IF son_tab-waerk NE lv_local_currency.
      PERFORM para_birimi.
    ENDIF.
    fatura = 0.
    miktar = 0.
    fatura = netwr_y + fatura.
    miktar = fkimg_y + miktar.
    son_tab-netwr = fatura.
    son_tab-fkimg = miktar.
  ENDSELECT.
ENDLOOP.
Hi,
If you need to collect at each different ytab-vbeln, use the <b>control level statements</b> AT NEW & ENDAT.
Ex:
sort ytab by vbeln.
LOOP AT ytab.
  AT NEW vbeln.
    COLLECT ytab.
  ENDAT.
ENDLOOP.
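For readers comparing with other languages, the effect of COLLECT — summing the non-key numeric fields of rows that share a key — can be sketched in Python. This is only a conceptual illustration with hypothetical field names borrowed from the question, not SAP code:

```python
# Conceptual sketch of ABAP COLLECT: rows sharing a key (vbeln here)
# have their numeric fields (fkimg, netwr) summed into one entry.
def collect(rows, key_field, sum_fields):
    totals = {}
    for row in rows:
        key = row[key_field]
        entry = totals.setdefault(key, {key_field: key, **{f: 0 for f in sum_fields}})
        for f in sum_fields:
            entry[f] += row[f]
    return list(totals.values())

rows = [
    {"vbeln": "90001", "fkimg": 10, "netwr": 100.0},
    {"vbeln": "90001", "fkimg": 5,  "netwr": 50.0},
    {"vbeln": "90002", "fkimg": 2,  "netwr": 20.0},
]
print(collect(rows, "vbeln", ["fkimg", "netwr"]))
```

The two rows for vbeln 90001 merge into one summed entry, which is exactly what COLLECT (or AT NEW with accumulation) achieves per billing document.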
Raja T
Similar Messages
-
Creating a collection from a list and joining the list to a database table
I would like to have opinions on good ways to process rows, based on a provided list of key values, joining the collected list against a source table to retrieve additional information related to the key. In this simple example, the procedure accepts a list of employee numbers. The goal is to print a list of names associated with those numbers. The method is to materialize the list of employee numbers as rows and join those rows to a source table to get the names. I have used BULK COLLECT. I don't know if this is a good approach and I would value suggestions. I would also like to understand why we cannot cast PLSQL tables using a type defined in the procedure's specification (why the type needs to exist as an object before we can cast it, like this:
SELECT * FROM TABLE ( CAST ( SOME_FUNCTION(&some parameter) AS SOME_TYPE ) );
Anyway, here is my demo SQL, which you should be able to execute against the SCOTT schema without any changes. Thanks for your help!
declare
  type employee_numbers is table of emp.empno%type index by binary_integer;
  type employee_names is table of emp.ename%type index by binary_integer;
  type employees_record is record (empno employee_numbers, person_name employee_names);
  records employees_record;
  employees_cursor sys_refcursor;
  employee_number_list varchar2(30) default '7369,7499,7521';
begin
  open employees_cursor for
    with t as (
      select regexp_substr(employee_number_list, '[^,]+', 1, level) as employee_number
      from dual
      connect by regexp_substr(employee_number_list, '[^,]+', 1, level) is not null
    ) select emp.empno, emp.ename
      from t join emp on (emp.empno = t.employee_number)
      order by 2;
  fetch employees_cursor bulk collect into records.empno, records.person_name;
  dbms_output.put_line('number of records: '||records.empno.count());
  for i in 1 .. records.empno.count
  loop
    dbms_output.put_line(chr(39)||records.empno(i)||chr(39)||','||chr(39)||records.person_name(i)||chr(39));
  end loop;
end;
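As a cross-check of the split-and-join shape outside Oracle, here is a small Python sketch. The lookup data is hypothetical; the regexp_substr/CONNECT BY query above does the same splitting inside SQL:

```python
# Split a comma-separated key list and join it against a lookup
# table -- the same shape as the regexp_substr / CONNECT BY trick.
employee_number_list = "7369,7499,7521"
emp = {7369: "SMITH", 7499: "ALLEN", 7521: "WARD", 7566: "JONES"}

numbers = [int(s) for s in employee_number_list.split(",")]
# join against the lookup, ordered by name (like "order by 2")
names = sorted(emp[n] for n in numbers if n in emp)
print(names)
```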
> It looks like I have confirmation that BULK COLLECT is a good way to go collect rows for processing
Well maybe and maybe not. Bear in mind that those demos were only basic demos for the purpose of illustrating how functionality CAN be used. They do not tell you WHEN to use them.
BULK COLLECT uses expensive PGA memory and unless you know that only a small number of rows will be collected you can have some serious memory issues. Any heavy duty use of BULK COLLECT should generally have a LIMIT clause to limit the number of elements in the collection for each loop iteration.
Always use SQL if possible.
Also, for your use case you might be better served using a PIPELINED function. Instead of collecting ALL rows into a nested table as in your example, a PIPELINED function returns one row at a time but is still used as if it were a table, via the same TABLE operator.
Here is simple example code for a PIPELINED function
-- type to match emp record
create or replace type emp_scalar_type as object
( EMPNO NUMBER(4),
  ENAME VARCHAR2(10),
  JOB VARCHAR2(9),
  MGR NUMBER(4),
  HIREDATE DATE,
  SAL NUMBER(7, 2),
  COMM NUMBER(7, 2),
  DEPTNO NUMBER(2)
);
/
-- table of emp records
create or replace type emp_table_type as table of emp_scalar_type;
/
-- pipelined function
create or replace function get_emp( p_deptno in number )
return emp_table_type
PIPELINED
as
TYPE EmpCurTyp IS REF CURSOR RETURN emp%ROWTYPE;
emp_cv EmpCurTyp;
l_rec emp%rowtype;
begin
open emp_cv for select * from emp where deptno = p_deptno;
loop
fetch emp_cv into l_rec;
exit when (emp_cv%notfound);
pipe row( emp_scalar_type( l_rec.empno, LOWER(l_rec.ename),
l_rec.job, l_rec.mgr, l_rec.hiredate, l_rec.sal, l_rec.comm, l_rec.deptno ) );
end loop;
return;
end;
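Outside PL/SQL, the pipelined idea — produce one row at a time instead of materializing the whole set — maps naturally onto Python generators. A rough sketch with made-up rows (not the real SCOTT schema):

```python
# Rough analogue of a PIPELINED function: a generator yields one
# row at a time instead of materializing the whole result set.
def get_emp(rows, deptno):
    for rec in rows:
        if rec["deptno"] == deptno:
            # transform on the way out, like LOWER(ename) above
            yield {**rec, "ename": rec["ename"].lower()}

emp = [
    {"empno": 7369, "ename": "SMITH", "deptno": 20},
    {"empno": 7499, "ename": "ALLEN", "deptno": 30},
    {"empno": 7566, "ename": "JONES", "deptno": 20},
]
names = [r["ename"] for r in get_emp(emp, 20)]
print(names)
```

Iterating the generator plays the same role as selecting from TABLE(get_emp(20)): rows are consumed as they are produced, so nothing larger than one row is held by the producer.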
select * from table(get_emp(20));
-
Configuring Kodo default implementation for field of Collection type
If I am not mistaken, the default implementation for a field of Collection type in Kodo is a LinkedList-based proxy. It would be great if it were possible to configure Kodo to use a proxy of my choosing.
I did some tests, and it seems to me that ArrayList is much more efficient than LinkedList (see below). Is there any specific reason I am not aware of that makes LinkedList better than ArrayList?
In my applications all collections are relatively small (or at least most of my collections are definitely small), and since I use the Collection interface there are no inserts into the middle of my collections - only appends (which ArrayList handles very well).
So my question is: can I make Kodo use ArrayListProxy for fields of Collection type (except, of course, by using an ArrayList field instead of Collection, which I do not want to do)?
Below are some statistics on collection performance (populating and iterating collections) - the same test against 3 collection implementations (JDK 1.4.1).
Not only is ArrayList by far the fastest and the most memory-friendly, it is also garbage collected much sooner and better. I show the max memory consumption here; the last two would not be garbage collected until all memory was in use (old-generation GC), while ArrayList seems to be collected by the young-generation GC, because it was collected very quickly between test cycles whereas the others were collected only when all memory was used.
So please make ArrayList your default collection implementation :-)
Small collection size (40)
time(ms) memory(kb)
ArrayList 5,218 62,154
LinkedList 14,125 240,066
HashSet 27,000 311,825
the same but using random inserts - append(index, object) rather than append(object):
ArrayList 8937, 53591
LinkedList 15047, 240066
Larger collection size (200)
ArrayList 4860, 47709
LinkedList 18468, 290704
HashSet 34391, 422282
the same but using random inserts - append(index, object) rather than append(object):
ArrayList 11844, 47709
LinkedList 25766, 290704
You should be able to accomplish this fairly easily by extending
SimpleProxyManager:
http://solarmetric.com/Software/Documentation/2.4.3/docs/javadoc/com/solarmetric/kodo/util/SimpleProxyManager.html
and overriding the appropriate methods (getCollectionCopy and
getCollectionProxy).
On Mon, 12 May 2003 12:26:21 -0400, Alex Roytman wrote the message quoted above.
-
Dear Gurus,
When I'm trying to create a collective invoice using VF04 for a few delivery-related invoices with the same payer, same billing date, same materials and same terms of payment, it creates individual invoices with the collective invoice option instead of a single collective invoice.
Please help me in resolving the issue.
Thanks in advance
Anush
Hi Raja,
Thanks for the mail. For my delivery-related invoices, all the items you mentioned above are the same:
1) Payer
2) Inco term
3) Payment term
4) Actual GI Date from Delivery
5) Shipping Condition
6) Account Assignment Group
7) Bill-to party
Regards
Aunsh
-
Performance ora:view and collection()
ora:view allows an XML structured table to be used in a XQuery "for" clause, while collection processes a set of documents in a directory. Normally these are two different sets of data.
However, with XDB I can have a WEBDAV directory which can consist of documents, all of which are stored in schema based structured storage.
Given this situation, where the ora:view and the collection() happen to be the exact same set of data (and this condition may be a rare corner case for many people), is there a performance difference between these two methods of access?
The underlying question I am wondering about is whether collection() will be rewritten to take advantage of any indexing on the structured data, in this situation.
A collection() directory can consist of XML in both structured and unstructured storage, in this case does the query reduce everything to the lowest common denominator (CLOB), or are any of the advantages of structured storage utilized?
Thanks,
Howard
Howard,
That's a good question. You should use ora:view() instead of collection(). Using ora:view() will give you better performance. Please contact me directly (geoff dot lee at oracle dot com) to discuss further.
Regards,
Geoff -
Regarding COLLECT stmt usage in an ABAP Program.
Hi All,
Could anyone please explain if the COLLECT statement really hampers the performance of the program, to a large extent.
If it is so, please explain how the performance can be improved with out using the same.
Thanks & Regards,
Goutham.
COLLECT allows you to create unique or summarized datasets. The system first tries to find a table entry corresponding to the table key (see also Defining Keys for Internal Tables). The key values are taken either from the header line of the internal table itab, or from the explicitly-specified work area wa. The line type of itab must be flat - that is, it cannot itself contain any internal tables. All the components that do not belong to the key must be numeric types (ABAP Numeric Types).
Notes
COLLECT allows you to create a unique or summarized dataset, and you should only use it when this is necessary. If neither of these characteristics are required, or where the nature of the table in the application means that it is impossible for duplicate entries to occur, you should use INSERT [wa INTO] TABLE itab instead of COLLECT. If you do need the table to be unique or summarized, COLLECT is the most efficient way to achieve it.
If you use COLLECT with a work area, the work area must be compatible with the line type of the internal table.
If you edit a standard table using COLLECT, you should only use the COLLECT or MODIFY ... TRANSPORTING f1 f2 ... statements (where none of f1, f2, ... may be in the key). Only then can you be sure that:
-The internal table actually is unique or summarized
-COLLECT runs efficiently. The check whether the dataset
already contains an entry with the same key has a constant
search time (hash procedure).
If you use any other table modification statements, the check for entries in the dataset with the same key can only run using a linear search (and will accordingly take longer). You can use the function module ABL_TABLE_HASH_STATE to test whether the COLLECT has a constant or linear search time for a given standard table.
Example
Summarized sales figures by company:
TYPES: BEGIN OF COMPANY,
NAME(20) TYPE C,
SALES TYPE I,
END OF COMPANY.
DATA: COMP TYPE COMPANY,
COMPTAB TYPE HASHED TABLE OF COMPANY
WITH UNIQUE KEY NAME.
COMP-NAME = 'Duck'. COMP-SALES = 10. COLLECT COMP INTO COMPTAB.
COMP-NAME = 'Tiger'. COMP-SALES = 20. COLLECT COMP INTO COMPTAB.
COMP-NAME = 'Duck'. COMP-SALES = 30. COLLECT COMP INTO COMPTAB.
Table COMPTAB now has the following contents:
NAME | SALES
Duck | 40
Tiger | 20
-
I keep all my image folders in one Master Catalog on one disk.
My system of filing is to have a main folder, say "Airshows", with sub folders named "Original-RAW Files" and "Finished Files".
Each sub folder is then further sub divided into "Show 1", "Show 2", "Show 3", etc.
This system also enable me to duplicate my files in such a way that they can be accessed by programmes other than Lightroom.
I also like to make Collections of specific planes, and these Collections can include images taken at any number of individual shows; but as LR3 keeps the Folders and the Collections in different sections, this entails looking in two different locations for any images taken at airshows.
Another disadvantage is that I believe LR Collections cannot be accessed by other third-party programmes, should I wish to change from Adobe.
I would like maintain my current filing system (which suits my purposes), but wish to find a way to file say,"Spitfire" Collection as "Airshow\Finished Files\Spitfire" so that all files are kept together.
Is there a way to achieve this without resorting to making duplicate image files?
> I would like to maintain my current filing system (which suits my purposes), but wish to find a way to file, say, the "Spitfire" Collection as "Airshow\Finished Files\Spitfire" so that all files are kept together.
> Is there a way to achieve this without resorting to making duplicate image files?
This is one of Lightroom's great strengths. You do not need duplicate image files to achieve a "Spitfire" collection (or keyword), so photos from many different folders can become part of the "Spitfire" collection. However, if your requirement is that you want this Spitfire collection to be in its own folder, then you will necessarily have to create duplicates.
Your problem illustrates the limitation of organizing via folders, and cannot be overcome unless you eliminate the requirement that "Spitfire" be in its own folder. Some people temporarily create a Spitfire folder, export the collection to this folder, display the photos via whatever mechanism they want, and, when it is no longer needed, eliminate the folder. Of course, another option is to browse/display your Spitfire photos inside of Lightroom, thus eliminating the need for a new folder containing duplicates of your photos.
> I believe LR Collections cannot be accessed by other third party programmes should I wish to change from Adobe
This is correct, and in my opinion it is a good reason to use keywords instead of collections for this purpose. Keywords can be transferred to any other photographic software, and can stay with the photo.
-
Hi all,
I want to add up the net value of all the line items which are in the same group. I am using COLLECT but I am unable to do that.
Please suggest what I should do.
DATA: BEGIN OF del_grp_data occurs 0,
vbeln like vbap-vbeln, " Sales document
grkor like vbap-grkor, " Delivery group
netwr like vbap-netwr, "net value
posnr like vbap-posnr, " Sales document item
End OF del_grp_data.
SELECT vbeln grkor netwr posnr
FROM vbap
INTO corresponding fields of TABLE del_grp_data
FOR ALL ENTRIES IN orders_vbeln
WHERE vbeln eq orders_vbeln-vbeln.
loop at del_grp_data.
collect del_grp_data.
endloop.
Regards,
Amit.
Basic form
COLLECT [wa INTO] itab.
Addition:
... SORTED BY f
Cannot Use Short Forms in Line Operations.
Effect
COLLECT allows you to create unique or summarized datasets. The system first tries to find a table entry corresponding to the table key. (See also Defining Keys for Internal Tables). The key values are taken either from the header line of the internal table itab, or from the explicitly-specified work area wa. The line type of itab must be flat - that is, it cannot itself contain any internal tables. All the components that do not belong to the key must be numeric types ( ABAP Numeric Types).
If the system finds an entry, the numeric fields that are not part of the table key (see ABAPNumeric Types) are added to the sum total of the existing entries. If it does not find an entry, the system creates a new entry instead.
The way in which the system finds the entries depends on the type of the internal table:
STANDARD TABLE:
The system creates a temporary hash administration for the table to find the entries. This means that the runtime required to find them does not depend on the number of table entries. The administration is temporary, since it is invalidated by operations like DELETE, INSERT, MODIFY, SORT, etc. A subsequent COLLECT is then no longer independent of the table size, because the system has to use a linear search to find entries. For this reason, you should only use COLLECT to fill standard tables.
SORTED TABLE:
The system uses a binary search to find the entries. There is a logarithmic relationship between the number of table entries and the search time.
HASHED TABLE:
The system uses the internal hash administration of the table to find records. Since (unlike standard tables) this remains intact even after table modification operations, the search time is always independent of the number of table entries.
For standard tables and SORTED TABLEs, the system field SY-TABIX contains the number of the existing or newly-added table entry after the COLLECT. With HASHED TABLEs, SY-TABIX is set to 0.
Notes
COLLECT allows you to create a unique or summarized dataset, and you should only use it when this is necessary. If neither of these characteristics are required, or where the nature of the table in the application means that it is impossible for duplicate entries to occur, you should use INSERT [wa INTO] TABLE itab instead of COLLECT. If you do need the table to be unique or summarized, COLLECT is the most efficient way to achieve it.
If you use COLLECT with a work area, the work area must be compatible with the line type of the internal table.
If you edit a standard table using COLLECT, you should only use the COLLECT or MODIFY ... TRANSPORTING f1 f2 ... statements (where none of f1, f2, ... may be in the key). Only then can you be sure that:
-The internal table actually is unique or summarized
-COLLECT runs efficiently. The check whether the dataset
already contains an entry with the same key has a constant
search time (hash procedure).
If you use any other table modification statements, the check for entries in the dataset with the same key can only run using a linear search (and will accordingly take longer). You can use the function module ABL_TABLE_HASH_STATE to test whether the COLLECT has a constant or linear search time for a given standard table.
Example
Summarized sales figures by company:
TYPES: BEGIN OF COMPANY,
NAME(20) TYPE C,
SALES TYPE I,
END OF COMPANY.
DATA: COMP TYPE COMPANY,
COMPTAB TYPE HASHED TABLE OF COMPANY
WITH UNIQUE KEY NAME.
COMP-NAME = 'Duck'. COMP-SALES = 10. COLLECT COMP INTO COMPTAB.
COMP-NAME = 'Tiger'. COMP-SALES = 20. COLLECT COMP INTO COMPTAB.
COMP-NAME = 'Duck'. COMP-SALES = 30. COLLECT COMP INTO COMPTAB.
Table COMPTAB now has the following contents:
NAME | SALES
Duck | 40
Tiger | 20
Addition
... SORTED BY f
Effect
COLLECT ... SORTED BY f is obsolete, and should no longer be used. It only applies to standard tables, and has the same function as APPEND ... SORTED BY f, which you should use instead. (See also Obsolete Language Elements).
Note
Performance:
Avoid unnecessary assignments to the header line when using internal tables with a header line. Whenever possible, use statements that have an explicit work area.
For example, " APPEND wa TO itab." is approximately twice as fast as " itab = wa. APPEND itab.". The same applies to COLLECT and INSERT.
The runtime of a COLLECT increases with the width of the table key and the number of numeric fields whose contents are summated.
Note
Non-Catchable Exceptions:
COLLECT_OVERFLOW: Overflow in an integer field during addition
COLLECT_OVERFLOW_TYPE_P: Overflow in a type P field during addition.
TABLE_COLLECT_CHAR_IN_FUNCTION: COLLECT on a non-numeric field.
Related
APPEND, WRITE ... TO, MODIFY, INSERT
Additional help
Inserting Summarized Table Lines
-
I often forget to include the latest import in a collection.
I also often - by mistake - work in folders instead of collections. Often I only recognize this when moving files and get warned about moving physical files (then I know I'm in Folders).
Suggestion:
- Make the appearance of folders and collection more distinct (different)
- After import always prompt to place in collection (or create one + add to collection). This feature should be optional.
- Support more than the latest import, i.e. the latest 20 imports, so I can go back and find the imports that miss a collection.
Provide the name of the program you are using so a Moderator may move this message to the correct program forum.
The Cloud is not a program, it is a delivery process... a program would be Photoshop or Lightroom or Muse or ??? -
Lightroom 3, Bridge, Catalogues & Collections
I recently bought Lightroom 3. I had not used Lightroom before. After reading various publications and the help manual, I am still fuzzy about catalogs vs. collections. If I want to organize my photos thematically (e.g., flowers, lighthouses, rocks, water, pets, oceans, rural, city, etc.), would it be better to have a catalog for each or have a collection for each? What are the advantages and disadvantages for each? Using Bridge before, my photos are organized by location and, within that, by date (e.g., province, garden, country, etc.). With Bridge I only made collections of my best shots.
Before I really dive into Lightroom, I would like to set it up properly. So all constructive help is greatly appreciated.
Thank you!
JB Dayle
Important rule: you only want one catalog. A catalog is not an organizing tool. There are no Lightroom functions that work across multiple catalogs, so if you want to find some pictures which are split between two catalogs, you cannot do so.
You probably want to start with keywords instead of collections. Keywords allow you to assign as many descriptors as you would like to each photo, whereas a photo can be in only a single folder. Once you have assigned the keyword descriptors to photos, finding those photos is quick and easy.
Keywords can be shared with other applications; collections cannot. You can search for photos with two or more keywords in the filter bar or in a smart collection; you can search for photos in two or more collections only in a smart collection.
JTable + Hashtable + Collection doesn't seem to work properly
Hi folks,
I'm having a problem with the JTree class. Basically, I want to instantiate it using a Hashtable, though depending on whether I use collections or arrays the result can vary substantially.
Take the following use case:
import javax.swing.*;
import java.util.*;
public class Tree extends JApplet{
    public Tree(){
        super();
        setSize(300, 250);
        init();
        setVisible(true);
    }
    public void init(){
        Hashtable<String,List<String>> hash = new Hashtable<String,List<String>>();
        hash.put("Foo", Arrays.asList("Bar"));
        JTree tree = new JTree(hash);
        getContentPane().add(tree);
    }
}
This displays a wrong tree; in fact you should get a Foo node that contains a Bar leaf.
Now if I use array instead of collections (generics are not the problem) everything work just as
expected!?
import javax.swing.*;
import java.util.*;
public class Tree extends JApplet{
    public Tree(){
        super();
        setSize(300, 250);
        init();
        setVisible(true);
    }
    public void init(){
        Hashtable<String,String []> hash = new Hashtable<String,String []>();
        hash.put("Foo", new String []{"Bar"});
        JTree tree = new JTree(hash);
        getContentPane().add(tree);
    }
}
Considering that the constructor of JTree allows the following instantiation for Hashtable:
public JTree(Hashtable<?,?> value)
the use of wildcards lets me assume that I can put whatever I like as the type of my keys/values.
Am I missing something, or is this indeed some very strange (buggy) behavior?
Cheers,
Mirco
lins314159 wrote:
[Javadocs for createChildren method|http://java.sun.com/javase/6/docs/api/javax/swing/JTree.DynamicUtilTreeNode.html#createChildren%28javax.swing.tree.DefaultMutableTreeNode,%20java.lang.Object%29]
If you look in the source code for JTree, there's no mention of Lists in there. Not quite sure why they handle only Vectors and Hashtables instead of Lists and Maps.
Hi lins,
thanks for taking the time for giving me your though.
I have taken a glance at the javadoc for the method createChildren, but once again it tells me that if I have an entry of the type
<A,Collection<B>> I should get a node named after the object of type A whose leaves are the ones contained in the collection.
I agree with you that it is bizarre that the JTree constructor only accepts Hashtable (and not Map, which appears more convenient to me);
but I can live with that as long as the value field of the hashtable behaves as I would expect.
I'm not an experienced Java programmer, so if any of you gurus has any thoughts on that, please let me know.
Cheers,
Mirco
Help with a COLLECT statement.
I had to make changes to some code and these changes required me to add some fields to an internal table. Below is what the table looked like before I made any changes:
DATA : BEGIN OF t_frgroup OCCURS 0,
* BEGIN CHANGE 02/11/03
         hazmat   TYPE c,
* END CHANGE 02/11/03
         mfrgr    LIKE lips-mfrgr,
         brgew    LIKE lips-brgew,
         lfimg    LIKE lips-lfimg,
         qtypal   LIKE w_nbr_palletsx,
         qtypce   LIKE w_nbr_palletsx,
         vstel    LIKE likp-vstel,
         no_cnvrt TYPE c,
       END OF t_frgroup.
This is what it looked like after I made the changes:
DATA : BEGIN OF t_frgroup OCCURS 0,
* BEGIN CHANGE 02/11/03
         hazmat       TYPE c,
* END CHANGE 02/11/03
         mfrgr        LIKE lips-mfrgr,
         brgew        LIKE lips-brgew,
         lfimg        LIKE lips-lfimg,
         qtypal       LIKE w_nbr_palletsx,
         qtypce       LIKE w_nbr_palletsx,
         vstel        LIKE likp-vstel,
         no_cnvrt     TYPE c,
         matnr        TYPE lips-matnr,
         vbeln        TYPE lips-vbeln,
         posnr        TYPE lips-posnr,
         qty          LIKE vblkp-lfimg,
         vrkme        LIKE lips-vrkme,
         converted(1) TYPE c VALUE 'N',
       END OF t_frgroup.
My issue is, after adding those fields, my collect statement no longer works:
LOOP AT t_lips.
MOVE-CORRESPONDING t_lips TO t_frgroup.
COLLECT t_frgroup.
ENDLOOP.
I need it to collect with the key being mfrgr. How can I do this? After adding the fields, the COLLECT statement now acts as an insert (I assume that matnr is now acting as part of the key) instead of collecting.
Regards,
Aaron
Hi Aaron,
1. Define the table keys while defining your internal table.
2. The order of the fields in the structure should be such that the key fields come first, with the quantity and amount fields after them.
3. Sort the table by the key fields before the loop.
The COLLECT statement is creating new entries because, if the system finds an entry with the same key fields, the numeric fields that are not part of the table key are added to the totals of the existing entry; if it does not find an entry, the system creates a new entry instead. Clearly the system is unable to find the existing entry, because the key fields are not defined in your internal table or the fields are out of order.
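This behavior — widening the implicit key so rows stop merging — can be mimicked in a short Python sketch. The field names are hypothetical, and the sketch simply treats the chosen key fields the way ABAP's default standard-table key (all non-numeric fields) is treated:

```python
# Default standard-table key = all non-numeric fields. Adding matnr
# to the structure widens the key, so rows no longer merge on mfrgr alone.
def collect(rows, key_fields, sum_fields):
    totals = {}
    for row in rows:
        key = tuple(row[f] for f in key_fields)
        entry = totals.setdefault(key, dict.fromkeys(sum_fields, 0))
        for f in sum_fields:
            entry[f] += row[f]
    return totals

rows = [
    {"mfrgr": "G1", "matnr": "M1", "lfimg": 5},
    {"mfrgr": "G1", "matnr": "M2", "lfimg": 7},
]
# key = mfrgr only -> one summed entry
print(len(collect(rows, ["mfrgr"], ["lfimg"])))
# key widened by matnr -> two separate entries (the reported symptom)
print(len(collect(rows, ["mfrgr", "matnr"], ["lfimg"])))
```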
Hope this helps.
A simple example depicting this is as follows :
TYPES: BEGIN OF COMPANY,
NAME(20) TYPE C,
SALES TYPE I,
END OF COMPANY.
DATA: COMP TYPE COMPANY,
COMPTAB TYPE HASHED TABLE OF COMPANY
WITH UNIQUE KEY NAME.
COMP-NAME = 'Duck'. COMP-SALES = 10. COLLECT COMP INTO COMPTAB.
COMP-NAME = 'Tiger'. COMP-SALES = 20. COLLECT COMP INTO COMPTAB.
COMP-NAME = 'Duck'. COMP-SALES = 30. COLLECT COMP INTO COMPTAB.
regards,
Advait Gode.
Libreadline error when collect -c on?
Hi,
I am trying to record count data for my application using collect's "-c on" option. I compile my application with the -xbinopt=prepare flag. But, when I try to run the measurement I get the following:
$ collect -c on ./tests/my_app
bit (warning): Can't operate on /usr/local/lib/libreadline.so.5, it is not compiled with -xbinopt=prepare.
bit (warning): Can't operate on /opt/SUNWspro/prod/lib/stlport4/v9/libstlport.so.1, it is not compiled with -xbinopt=prepare.
bit: (warning): /DEV/TMP/ictest/SUNW_Bit_Cache/count/DEV/TMP/ictest/tests/c6c33ab6e1c3c0a148f16bf30d808cbc_1235495206/my_app: 61.6% of code instrumented (7127 out of 11157 functions).
ld.so.1: my_app: fatal: libreadline.so.5: open failed: No such file or directory
bit (error): No instrdata file for main target.
Any idea how to get rid of this error?
My machine has:
SunOS 5.10 Generic_127111-09 sun4v sparc SUNW,SPARC-Enterprise-T5220
CC: Sun C++ 5.9 SunOS_sparc 2007/05/03
Sun Dbx Debugger 7.6 SunOS_sparc 2007/05/03
analyzer: Sun Analyzer 7.6 SunOS_sparc 2007/05/03
Regards,
-Ippokratis.
It worked by calling "bit collect" instead of collect.
-
Mini Displayport to DVI Adapter not sending signal to DVI Monitor
I have a dell monitor connected to my Thunderbolt Display that will not get a signal. The component string looks like this
MBP - Thunderbolt Display- Mini Displayport to DVI adapter- Dell 22" DVI monitor.
I would like to use this monitor as a third monitor instead of collecting dust. Any help would be appreciated.
Thanks,
Hello fuzzymelton,
The article linked below details a number of useful troubleshooting steps that may help get the additional monitor to function.
Apple computers: Troubleshooting issues with video on internal or external displays
http://support.apple.com/kb/HT1573
Cheers,
Allen -
Performance with Apex 3.2.1
Hello All,
I am using Oracle APEX 3.2.1 on Oracle 10g.
We are using interactive reports based on collections that involve at most 5 tables with outer joins. When the total number of records is huge, the report takes a long time - almost 30 minutes to show 20,000-odd records.
We are also using the EGY extended gateway.
Can anyone suggest any methods to improve the performance?
I have a few questions needing clarification:
1) How does APEX retrieve records from the DB while displaying them?
2) Will performance improve if we use temporary tables instead of collections?
3) What different mechanisms can we use in APEX to improve performance in data retrieval?
Thanks/kumar
=========
PLEASE DO NOT POST A NEW POSTING THAT DEALS WITH AN OPEN QUESTION YOU HAVE POSTED. This is VERY annoying and wasteful of the forum users' time. If you need to, you can ADD this to your existing posting...
Thank you,
Tony Miller
Webster, TX