Saving filter results in separate table?
So if there is no such possibility, how can I achieve that? Which functions should I use?
Thanks in advance,
Bart
I'm not sure I entirely follow you, but perhaps you are looking for something like my solution from this thread:
http://discussions.apple.com/thread.jspa?messageID=5536581
You can download my example file from that thread directly with this link:
http://www.mediafire.com/?40lrg5dcywc
Refer to my postings on the thread for discussion of the solution.
Similar Messages
-
Getting results from multiple tables using a code table.
I am having a hard time getting data from different tables that are not directly connected.
The data model is not proper. Why would you store the game details in separate tables?
IMO it's just a matter of putting them in the same table with an additional field, gamename, to indicate the game. That would have been much easier to query and retrieve results from.
Anyway, with the current design, what you can do is this:
SELECT p.ID, p.Name, c.CategoryName AS [WinnerIn]
FROM GameParticipants p
INNER JOIN (SELECT ParticipantID, Value AS ParticipantName, 'CGW Table' AS TableName
FROM CGWTable
UNION ALL
SELECT ParticipantID, Value AS ParticipantName, 'FW Table' AS TableName
FROM FWTable
UNION ALL
SELECT ParticipantID, Value AS ParticipantName, 'WC Winner' AS TableName
FROM WCWinner
-- ... all other winner tables
) v
ON v.ParticipantID = p.ID
AND v.ParticipantName = p.Name
INNER JOIN Category c
ON c.TableName = v.TableName
If you want, you can also add a filter like
WHERE p.Name = @Name
to restrict the results to a particular player, like Henry in your example.
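The UNION ALL approach above can be tried end to end; here is a minimal runnable sketch using SQLite as a stand-in, with the table and column names from the thread and invented sample rows:

```python
import sqlite3

# Stand-in schema for the winner tables from the thread; the sample
# data is invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE GameParticipants (ID INTEGER, Name TEXT);
CREATE TABLE CGWTable (ParticipantID INTEGER, Value TEXT);
CREATE TABLE FWTable  (ParticipantID INTEGER, Value TEXT);
CREATE TABLE Category (TableName TEXT, CategoryName TEXT);

INSERT INTO GameParticipants VALUES (1, 'Henry'), (2, 'Alice');
INSERT INTO CGWTable VALUES (1, 'Henry');
INSERT INTO FWTable  VALUES (2, 'Alice');
INSERT INTO Category VALUES ('CGW Table', 'CGW'), ('FW Table', 'FW');
""")

rows = con.execute("""
SELECT p.ID, p.Name, c.CategoryName AS WinnerIn
FROM GameParticipants p
INNER JOIN (
    SELECT ParticipantID, Value AS ParticipantName, 'CGW Table' AS TableName
    FROM CGWTable
    UNION ALL
    SELECT ParticipantID, Value AS ParticipantName, 'FW Table' AS TableName
    FROM FWTable
) v ON v.ParticipantID = p.ID AND v.ParticipantName = p.Name
INNER JOIN Category c ON c.TableName = v.TableName
WHERE p.Name = 'Henry'
""").fetchall()
print(rows)  # [(1, 'Henry', 'CGW')]
```

The derived table v normalizes every winner table into (ParticipantID, ParticipantName, TableName) rows, so a single join against Category covers them all.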
Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs -
Summarize or filter entries to another table
Hello,
please forgive my bad English, I am not a native speaker!
I have a table:
A..........B..............C..........D............E
Name.......CalendarWeek...Date.......Amount 1.....Amount 2
Tom........1..............05.01.09....100,00......20,00
Susan......1..............05.01.09.....50,00......00,00
Rob........1..............05.01.09....120,00......40,00
Tom........2..............13.01.09....140,00......30,00
Steve......2..............13.01.09....110,00......20,00
Tom........3..............22.01.09.....80,00......00,00
Susan......3..............22.01.09.....70,00......20,00
Tom........4..............29.01.09....100,00......20,00
Rob........4..............29.01.09....200,00......20,00
I would like to filter out one particular name and put all corresponding entries in another table, like this:
A..........B..............C..........D............E
Name.......CalendarWeek...Date.......Amount 1.....Amount 2
Tom........1..............05.01.09....100,00......20,00
Tom........2..............13.01.09....140,00......30,00
Tom........3..............22.01.09.....80,00......00,00
Tom........4..............29.01.09....100,00......20,00
I guess if I am able to do this, I can figure out how to get certain weeks as well, e.g.:
A..........B..............C..........D............E
Name.......CalendarWeek...Date.......Amount 1.....Amount 2
Tom........3..............22.01.09.....80,00......00,00
Susan......3..............22.01.09.....70,00......20,00
I don't want to reorganize or filter in the original table, since I need the results saved in a new table for printing and archiving.
Thanks in advance
Tom
Tom,
Another approach you may explore is to use the new Category feature of Numbers '09. Using your sample with a few more data rows, three identical tables were created.
The Employee Table is categorized first by name and sub-categorized by week number while the Weekly table is categorized in reverse. Both are sorted by date, but the rows jump into place based on the categories as they are added. The category rows have been filled in with a darker green while a lighter green was used to fill the sub-categories in order to follow the content of the table visually.
Adding rows to these tables can be somewhat unnerving because the completion of a cell will immediately move the new row to its proper category, so a third table, identical to the other two except that no categories are defined, is created. Data on these rows is copied to the bottom of the other two tables which become properly updated immediately.
Formatting category rows is quite limited, but you are able to put subtotals in appropriate cells and add a fill color to make them stand out. Whatever formatting you apply to one category row is automatically carried to the other category rows of that level. More information is, of course, found under "Help".
Also illustrated below is the capability of collapsing the categories so that only subtotals by week are shown.
pw -
How do I import one xml file into 3 separate tables in db?
I need to utilize xslt to import one xml file into 3 separate tables: account, accountAddress, streetAddress
*Notice the missing values in middleName, accountType
sample xml
<account>
<firstName>Joe</firstName>
<middleName></middleName>
<lastName>Torre</lastName>
<accountAddress>
<streetAddress>
<addressLine>myAddressLine1</addressLine>
<addressLine>myAddressLine2</addressLine>
</streetAddress>
<city>myCity</city>
<state>myState</state>
<postalCode>mypostalCode</postalCode>
</accountAddress>
<accountId>A001</accountId>
<accountType></accountType>
</account>
I need the following 3 results in 3 separate xml files in order for me to upload into my 3 tables.
Result #1
<rowset>
<row>
<firstName>Joe</firstName>
<lastName>Torre</lastName>
<accountId>A001</accountId>
</row>
</rowset>
Result #2
<rowset>
<row>
<addressId>1</addressId>
<city>myCity</city>
<state>myState</state>
<postalCode>myPostalCode</postalCode>
</row>
</rowset>
Result #3
<rowset>
<row>
<addressId>1</addressId>
<addressLineSeq>1</addressLineSeq>
<addressLine>myAddressLine1</addressLine>
</row>
<row>
<addressId>1</addressId>
<addressLineSeq>2</addressLineSeq>
<addressLine>myAddressLine2</addressLine>
</row>
</rowset>
Use XSU to store in multiple tables.
"XSU can only store data in a single table. You can store XML across tables, however, by using the Oracle XSLT processor to transform a document into multiple documents and inserting them separately. You can also define views over multiple tables and perform insertions into the views. If a view is non-updatable (because of complex joins), then you can use INSTEAD OF triggers over the views to perform the inserts."
http://download-west.oracle.com/docs/cd/B19306_01/appdev.102/b14252/adx_j_xsu.htm#i1007013 -
Tables in subquery resulting in full table scans
Hi,
This is related to P1 bug 13009447. The customer recently upgraded to 10g, and has reported this type of problem for the second time.
Problem Description:
All the tables in the sub-queries are causing full table scans, so the query runs for hours.
Here is the query
SELECT /*+ PARALLEL*/
act.assignment_action_id
, act.assignment_id
, act.tax_unit_id
, as1.person_id
, as1.effective_start_date
, as1.primary_flag
FROM pay_payroll_actions pa1
, pay_population_ranges pop
, per_periods_of_service pos
, per_all_assignments_f as1
, pay_assignment_actions act
, pay_payroll_actions pa2
, pay_action_classifications pcl
, per_all_assignments_f as2
WHERE pa1.payroll_action_id = :b2
AND pa2.payroll_id = pa1.payroll_id
AND pa2.effective_date
BETWEEN pa1.start_date
AND pa1.effective_date
AND act.payroll_action_id = pa2.payroll_action_id
AND act.action_status IN ('C', 'S')
AND pcl.classification_name = :b3
AND pa2.consolidation_set_id = pa1.consolidation_set_id
AND pa2.action_type = pcl.action_type
AND nvl (pa2.future_process_mode, 'Y') = 'Y'
AND as1.assignment_id = act.assignment_id
AND pa1.effective_date
BETWEEN as1.effective_start_date
AND as1.effective_end_date
AND as2.assignment_id = act.assignment_id
AND pa2.effective_date
BETWEEN as2.effective_start_date
AND as2.effective_end_date
AND as2.payroll_id = as1.payroll_id
AND pos.period_of_service_id = as1.period_of_service_id
AND pop.payroll_action_id = :b2
AND pop.chunk_number = :b1
AND pos.person_id = pop.person_id
AND (
as1.payroll_id = pa1.payroll_id
OR pa1.payroll_id IS NULL)
AND NOT EXISTS
(SELECT /*+ PARALLEL*/ NULL
FROM pay_assignment_actions ac2
, pay_payroll_actions pa3
, pay_action_interlocks int
WHERE int.locked_action_id = act.assignment_action_id
AND ac2.assignment_action_id = int.locking_action_id
AND pa3.payroll_action_id = ac2.payroll_action_id
AND pa3.action_type IN ('P', 'U'))
AND NOT EXISTS
(SELECT /*+ PARALLEL*/
NULL
FROM per_all_assignments_f as3
, pay_assignment_actions ac3
WHERE :b4 = 'N'
AND ac3.payroll_action_id = pa2.payroll_action_id
AND ac3.action_status NOT IN ('C', 'S')
AND as3.assignment_id = ac3.assignment_id
AND pa2.effective_date
BETWEEN as3.effective_start_date
AND as3.effective_end_date
AND as3.person_id = as2.person_id)
ORDER BY as1.person_id
, as1.primary_flag DESC
, as1.effective_start_date
, act.assignment_id
FOR UPDATE OF as1.assignment_id
, pos.period_of_service_id
Here is the execution plan for this query. We tried adding hints in the sub-queries to force index usage, but it is still doing full table scans.
We suspect some DB parameter is causing this issue.
In the plan below, note:
- Full table scans on tables in the first sub-query:
PAY_PAYROLL_ACTIONS, PAY_ASSIGNMENT_ACTIONS, PAY_ACTION_INTERLOCKS
- Full table scans on tables in the second sub-query:
PER_ALL_ASSIGNMENTS_F, PAY_ASSIGNMENT_ACTIONS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 29 398.80 2192.99 238706 4991924 2383 0
Fetch 1136 378.38 1921.39 0 4820511 0 1108
total 1166 777.19 4114.38 238706 9812435 2383 1108
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 41 (APPS) (recursive depth: 1)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
0 FOR UPDATE
0 PX COORDINATOR
0 PX SEND (QC (ORDER)) OF ':TQ10009' [:Q1009]
0 SORT (ORDER BY) [:Q1009]
0 PX RECEIVE [:Q1009]
0 PX SEND (RANGE) OF ':TQ10008' [:Q1008]
0 HASH JOIN (ANTI BUFFERED) [:Q1008]
0 PX RECEIVE [:Q1008]
0 PX SEND (HASH) OF ':TQ10006' [:Q1006]
0 BUFFER (SORT) [:Q1006]
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE) [:Q1006]
0 NESTED LOOPS [:Q1006]
0 NESTED LOOPS [:Q1006]
0 NESTED LOOPS [:Q1006]
0 HASH JOIN (ANTI) [:Q1006]
0 BUFFER (SORT) [:Q1006]
0 PX RECEIVE [:Q1006]
0 PX SEND (HASH) OF ':TQ10002'
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE)
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_PAYROLL_ACTIONS' (TABLE)
0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_PAYROLL_ACTIONS_PK' (INDEX (UNIQUE))
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PAY_POPULATION_RANGES_N4' (INDEX)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_PERIODS_OF_SERVICE' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_PERIODS_OF_SERVICE_N3' (INDEX)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_ASSIGNMENTS_N4' (INDEX)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PAY_ASSIGNMENT_ACTIONS_N51' (INDEX)
0 PX RECEIVE [:Q1006]
0 PX SEND (HASH) OF ':TQ10005' [:Q1005]
0 VIEW OF 'VW_SQ_1' (VIEW) [:Q1005]
0 HASH JOIN [:Q1005]
0 BUFFER (SORT) [:Q1005]
0 PX RECEIVE [:Q1005]
0 PX SEND (BROADCAST) OF ':TQ10000'
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_PAYROLL_ACTIONS' (TABLE)
0 HASH JOIN [:Q1005]
0 PX RECEIVE [:Q1005]
0 PX SEND (HASH) OF ':TQ10004' [:Q1004]
0 PX BLOCK (ITERATOR) [:Q1004]
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE) [:Q1004]
0 BUFFER (SORT) [:Q1005]
0 PX RECEIVE [:Q1005]
0 PX SEND (HASH) OF ':TQ10001'
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ACTION_INTERLOCKS' (TABLE)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_PAYROLL_ACTIONS' (TABLE) [:Q1006]
0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_PAYROLL_ACTIONS_PK' (INDEX (UNIQUE)) [:Q1006]
0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_ACTION_CLASSIFICATIONS_PK' (INDEX (UNIQUE))[:Q1006]
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_ASSIGNMENTS_F_PK' (INDEX (UNIQUE)) [:Q1006]
0 PX RECEIVE [:Q1008]
0 PX SEND (HASH) OF ':TQ10007' [:Q1007]
0 VIEW OF 'VW_SQ_2' (VIEW) [:Q1007]
0 FILTER [:Q1007]
0 HASH JOIN [:Q1007]
0 BUFFER (SORT) [:Q1007]
0 PX RECEIVE [:Q1007]
0 PX SEND (BROADCAST) OF ':TQ10003'
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE)
0 PX BLOCK (ITERATOR) [:Q1007]
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE) [:Q1007]
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
enq: KO - fast object checkpoint 32 0.02 0.12
os thread startup 8 0.02 0.19
PX Deq: Join ACK 198 0.00 0.04
PX Deq Credit: send blkd 167116 1.95 1103.72
PX Deq Credit: need buffer 327389 1.95 266.30
PX Deq: Parse Reply 148 0.01 0.03
PX Deq: Execute Reply 11531 1.95 1901.50
PX qref latch 23060 0.00 0.60
db file sequential read 108199 0.17 22.11
db file scattered read 9272 0.19 51.74
PX Deq: Table Q qref 78 0.00 0.03
PX Deq: Signal ACK 1165 0.10 10.84
enq: PS - contention 73 0.00 0.00
reliable message 27 0.00 0.00
latch free 218 0.00 0.01
latch: session allocation 11 0.00 0.00
Thanks in advance
Suresh PV
Hi,
welcome,
how does the query perform if you remove all the PARALLEL hints? Most of the waits are related to parallel execution.
Herald ten Dam
http://htendam.wordpress.com
PS. Use "{code}" for showing your code and explain plans, it looks nicer -
DECODE versus creating a separate table and doing a join
Hi,
This question is on what Oracle does internally for the decode function on a field value.
I have two options:
(i) use the decode function to do a quick and dirty SQL statement
(ii) create a table that has the results what that decode will do, and do a join with that table.
I tend to shy away from using DECODE in SQL, since I hate the idea of putting business information into an SQL statement; I also thought using functions of any kind slowed SQL down, and that one should stick to "standard" SQL as much as possible.
Is the gain from using a separate table real, illusory or is it worse than using DECODE?
Thanks,
Regards,
Srini
Thanks for all the responses. I talked with my DBA and decided to go with the table option rather than the DECODE option. Here are the reasons.
1. Mixing business information in your program is never a good idea (this of course goes beyond just SQL/Oracle, applying to all programming). This is related to what you mentioned, Kevin. I don't want to restrict the names in my tables simply because I have hard-coded something into my program. Over a period of time, this would result in a minefield of decodes.
2. I forgot to mention: the DECODE would be in my WHERE clause, which is why I asked about speed. It seemed to me that if the system had to translate each field value, it could not take advantage of any index on the column and would be slower. Cartesian products are bad enough; we don't want a computation tacked on as well, right?
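Reason 2 — that a function applied to a column in the WHERE clause defeats indexing — is exactly what the lookup-table join avoids. A minimal sketch in SQLite, with table and column names invented for illustration:

```python
import sqlite3

# A small code table replaces a hard-coded DECODE(...) translation,
# so the mapping lives in data rather than in the SQL statement.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (id INTEGER, status_code TEXT);
CREATE TABLE status_lookup (code TEXT PRIMARY KEY, label TEXT);
INSERT INTO orders VALUES (1, 'N'), (2, 'S'), (3, 'N');
INSERT INTO status_lookup VALUES ('N', 'New'), ('S', 'Shipped');
""")

# Instead of filtering on DECODE(o.status_code, 'N', 'New', 'S', 'Shipped'),
# filter on the decoded value via the join; o.status_code stays untouched,
# so an index on it remains usable.
rows = con.execute("""
SELECT o.id, l.label
FROM orders o
JOIN status_lookup l ON l.code = o.status_code
WHERE l.label = 'New'
ORDER BY o.id
""").fetchall()
print(rows)  # [(1, 'New'), (3, 'New')]
```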
Regards,
Srini -
SSIS question - Email the results of the table in pipe delimited format to the users
I am new to SSIS and I have a requirement. I have a sales table that has transactions for all months. I am trying to automate the process which involves following steps:
I need to query the table based on previous month.
Email the results of the table in pipe delimited format to the users.
What I am currently doing:
I created a temp table and load the query results into it every month, truncating the previous month's data. I save the query results in Excel format, open the Excel file and save it as CSV, then use an SSIS package to convert the CSV to TXT format and email it manually with the file named "Salesresults_<previousmonth>".
I just want to automate this process, but the main challenge is sending the email with the txt attached. Please help me out.
First create an SSIS variable (@[User::Path]) that stores the folder path, e.g. "C:\Test".
Select the "Expression" tab in the Send Mail Task Editor, select the appropriate property (FileAttachments) and assign the expression below:
@[User::Path] + "\\Salesresults_" + (DT_WSTR, 10) MONTH( GETDATE() ) + (DT_WSTR, 10) YEAR( GETDATE() ) + ".txt"
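One detail worth checking: the requirement names the file after the previous month, while the GETDATE() expression above yields the current month. A sketch of the previous-month computation (in Python for illustration; in the SSIS expression the analogous step would be shifting the date back one month with DATEADD before extracting month and year):

```python
import datetime

def sales_filename(today: datetime.date) -> str:
    # Step back to the last day of the previous month, then format
    # the name as Salesresults_<MMYYYY>.txt (format is an assumption).
    first_of_month = today.replace(day=1)
    prev_month_end = first_of_month - datetime.timedelta(days=1)
    return f"Salesresults_{prev_month_end:%m%Y}.txt"

print(sales_filename(datetime.date(2024, 1, 15)))  # Salesresults_122023.txt
```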
Regards, RSingh -
Will there be a performance improvement with separate tables vs a single table with multiple partitions? Is it advisable to have separate tables rather than a single big table with partitions? Can we expect the same performance from a single big table with partitions? What is the recommended approach in HANA?
Suren,
first off a friendly reminder: SCN is a public forum and for you as an SAP employee there are multiple internal forums/communities/JAM groups available. You may want to consider this.
Concerning your question:
You didn't tell us what you want to do with your table or your set of tables.
As tables are not only storage units but usually bear semantics - read: if data is stored in one table it means something else than the same data in a different table - partitioned tables cannot simply be substituted by multiple tables.
Looked at on a storage technology level, table partitions are practically the same as tables. Each partition has its own delta store and can be loaded and displaced to/from memory independently of the others.
Generally speaking there shouldn't be too many performance differences between a partitioned table and multiple tables.
However, when dealing with partitioned tables, the additional step of determining the partition to work on is always required. If computing the result of the partitioning function takes a major share in your total runtime (which is unlikely) then partitioned tables could have a negative performance impact.
Having said this: as with all performance related questions, to get a conclusive answer you need to measure the times required for both alternatives.
- Lars -
Howto: Save prediction query results to relational table
I believe saving prediction query results to relational tables is possible (the BI studio does it!). I am not clear on how to do this without the BI studio: if I write a DMX query and want to store its output in a relational table, how do I do it?
Tips, anyone?
Thanks!
a) You can write some code to do this on the client side. Use ADOMD.NET in your C# app to execute the DMX query and fetch a data reader, open up another connection to your relational database, and write rows of data to the second connection as you read them from the first.
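Option (a) can be sketched generically as streaming rows from one connection into another. Here two SQLite connections stand in for the ADOMD.NET reader and the relational destination, with invented prediction rows:

```python
import sqlite3

# Source connection: stands in for the Analysis Services / DMX side.
src = sqlite3.connect(":memory:")
src.executescript("""
CREATE TABLE predictions (case_id INTEGER, predicted TEXT);
INSERT INTO predictions VALUES (1, 'yes'), (2, 'no');
""")

# Destination connection: stands in for the relational database.
dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE prediction_results (case_id INTEGER, predicted TEXT)")

# Stream rows from the source cursor into the destination as they arrive,
# rather than materializing the whole result set first.
cur = src.execute("SELECT case_id, predicted FROM predictions")
for row in cur:
    dst.execute("INSERT INTO prediction_results VALUES (?, ?)", row)
dst.commit()

print(dst.execute("SELECT COUNT(*) FROM prediction_results").fetchone()[0])  # 2
```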
b) You can create a linked server to your Analysis Server instance in your SQL Server relational server instance and then execute a "SELECT * INTO <newtable> FROM OPENQUERY(<linkedserver>, <DMX query>)" T-SQL statement from your relational database connection. -
Not saving the data in two tables
Hello,
It's my production problem. I have an update form where you can update records, and these records sit in temp tables until final approval from the supervisor.
In this update form I have two tables where I save the data: dup_emp, to save the officer data, and dup_address, to save the data about where the officer worked.
In this form the address form is a pop-up screen; you update it and get back to the original form, where you can see all the other fields. My problem is when a user hits the Cancel button on the address form (for example because the user doesn't want to update any information on that screen), comes back to the other screen, makes changes to the appropriate fields, and hits the SAVE button. In that case only the dup_emp data is saved, not the address data from the address form into dup_address for the same record.
If the user hits Cancel on both screens, the record should be deleted from both tables; but Cancel in one form and Save in the other should save the record in both tables.
here is my code from both cancel buttons from both the forms.
this is code is from address form cancel button.
delete from dup_address
where address_id=:address_id
and parent_table_name='emp';
commit;
CLEAR_BLOCK;
go_block('DUP_EMP');
This code is from dup form of the cancel button
declare
temp_address_id varchar2 (12);
begin
delete from dup_emp
where secondemp_id =:dup_emp.secondemp_id;
delete from dup_address
where parent_t_id=:global.secondemp
and parent_table_name='emp';
commit;
clear_block;
go_block('secondaryemp');
END;
Hi,
As Aravind mentioned, it's nothing related to workflow. You have to find a BADI in tcode PA30 that can be used after the infotype is updated. Then you can use FM SAVE_TEXT.
Regards, -
How to use a FILTER in a normal table in ABAP WEB DYNPRO
Hi Experts,
I need to filter my table in the UI using the 'onFilter' event, BUT I want the first row of the table to be my inputs for the filtering, just like in an ALV table.
Since I don't want to use an ALV table, I want the same thing done in a transparent table...
Dear pramodh,
Here you can achieve the filter option in a transparent table by creating a toggle button in the table.
1. Apply the filter in the OnToggle event:
wd_this->table_control->apply_filter( ).
2. When the Filter button is pressed, the IS_FILTER_ON attribute turns ON and the filter is set.
The first row is then automatically set up for inputs.
The following code is required to get the handler for the table and also to set the filter.
method WDDOMODIFYVIEW .
DATA: wd_table TYPE REF TO cl_wd_table,
w_is_filter_on TYPE wdy_boolean.
wd_context->get_attribute( EXPORTING name = 'IS_FILTER_ON'
IMPORTING value = w_is_filter_on ).
wd_table ?= view->get_element( '<give ur table ID>' ).
wd_this->table_control ?= wd_table->_method_handler.
IF w_is_filter_on = abap_true.
wd_table->set_on_filter( 'FILTER' ).
else.
wd_table->set_on_filter( '' ).
ENDIF.
endmethod.
I believe you know about the table handler; I can help if you need.
Thanks & Regards,
Rakesh Vanamala. -
I am using Lightroom 5 and am happy with it, but recently I have encountered a problem when trying to edit an image in another program, i.e. Elements. I used the clone tool, saved the result and went back to Lightroom to open the image there. I did find a second copy, but it is identical to the one I already had in Lightroom, and I can see no result of the clone process.
At the moment I would say it is uncertain whether Acrobat is completely ignoring the change, or whether the changes you are making aren't affecting the ink density. I have a suggestion for a test you could make to see which is the case.
When editing in Photoshop just add a box or mark on the image, for your tests.
- If this box does not appear in Acrobat, you know Acrobat is not seeing the edit
- If this box does appear in Acrobat, you know the problem is that your tool for changing ink densities is not taking effect. -
Get ALV Report's Result Into Internal Table
Hello everyone,
is there a way to get an ALV report's result into an internal table?
For example: is it possible to get the values from transaction S_ALR_87012078?
I mean I want to write a function that gives you the ALV report's result as an internal table, so you won't have to dig into the report's code.
thanks for any answers. bye.
Hi
This is a wrapper program which executes the standard program and captures the output, since we submit the report exporting its list to memory.
After that we use LIST_FROM_MEMORY and LIST_TO_ASCI.
submit rhrhaz00 exporting list to memory
and return
with pchplvar = pchplvar
with pchotype = pchotype
with pchobjid in pchobjid
with pchsobid in pchsobid
with pchseark = pchseark
with pchostat = pchostat
with pchistat = pchistat
with pchztr_d = pchztr_d
with pchztr_a = pchztr_a
with pchztr_z = pchztr_z
with pchztr_m = pchztr_m
with pchztr_p = pchztr_p
with pchztr_y = pchztr_y
with pchztr_f = pchztr_f
with pchobeg = pchobeg
with pchoend = pchoend
with pchtimed = pchtimed
with pchbegda = pchbegda
with pchendda = pchendda
* with pchwegid = pchwegid
* with pchsvect = pchsvect
* with pchdepth = pchdepth
with infty in infty
with subty in subty
with vdata = vdata
with info = info.
data: list like abaplist occurs 0 with header line.
data: txtlines(1024) type c occurs 0 with header line.
clear list.
refresh list.
clear tbl_reportlines.
refresh tbl_reportlines.
call function 'LIST_FROM_MEMORY'
tables
listobject = list
exceptions
not_found = 1
others = 2.
if sy-subrc <> 0.
message id sy-msgid type sy-msgty number sy-msgno
with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
endif.
call function 'LIST_TO_ASCI'
* EXPORTING
* LIST_INDEX = -1
* WITH_LINE_BREAK = ' '
* IMPORTING
* LIST_STRING_ASCII =
* LIST_DYN_ASCII =
tables
listasci = txtlines
listobject = list
exceptions
empty_list = 1
list_index_invalid = 2
others = 3.
if sy-subrc <> 0.
message id sy-msgid type sy-msgty number sy-msgno
with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
else.
tbl_reportlines[] = txtlines[].
call function 'LIST_FREE_MEMORY'.
endif.
Here is the simple example program
*Example Code (Retrieving list from memory)
DATA BEGIN OF itab_list OCCURS 0.
INCLUDE STRUCTURE abaplist.
DATA END OF itab_list.
DATA: BEGIN OF vlist OCCURS 0,
filler1(01) TYPE c,
field1(06) TYPE c,
filler(08) TYPE c,
field2(10) TYPE c,
filler3(01) TYPE c,
field3(10) TYPE c,
filler4(01) TYPE c,
field4(3) TYPE c,
filler5(02) TYPE c,
field5(15) TYPE c,
filler6(02) TYPE c,
field6(30) TYPE c,
filler7(43) TYPE c,
field7(10) TYPE c,
END OF vlist.
SUBMIT zreport EXPORTING LIST TO MEMORY.
CALL FUNCTION 'LIST_FROM_MEMORY'
TABLES
listobject = itab_list
EXCEPTIONS
not_found = 4
OTHERS = 8.
CALL FUNCTION 'LIST_TO_ASCI'
EXPORTING
list_index = -1
TABLES
listasci = vlist
listobject = itab_list
EXCEPTIONS
empty_list = 1
list_index_invalid = 2
OTHERS = 3.
IF sy-subrc NE '0'.
WRITE:/ 'LIST_TO_ASCI error !! ', sy-subrc.
ENDIF.
-
Filter Key problem in table without ALV
HI,
I have two tabs in my view. In each tab I have a table, and the tables are bound to different nodes.
Now I am applying a FILTER to these two tables using a Toolbar UI element. I have created a toolbar toggle button inside it.
In the action (DO_FILTER) of this button I am using the following code:
IF table_1.
" in any case clear the table's filter strings
l_node = wd_context->get_child_node( if_v_1=>wdctx_n_node1 ).
l_node->invalidate( ).
" if the filter is off: show the whole shebang
wd_this->table_method_hndl->apply_filter( ).
ELSEIF table_2.
" in any case clear the table's filter strings
l_node = wd_context->get_child_node( if_v_2=>wdctx_n_node2 ).
l_node->invalidate( ).
" if the filter is off: show the whole shebang
wd_this->table_method_hndl->apply_filter( ).
ENDIF.
In the modify view I am calling this action 'DO_FILTER' to apply my filter option. When I run my program the filter keys work fine.
I have the problem in the following scenario:
1. In tab1 I have 5 rows in the table. I apply the filter and get 2 rows.
2. Now I use the select tab to go to tab2, where I have 6 rows.
3. Now I go back to tab1. My problem comes here:
it now displays only 2 rows in table1 (the first two of the total of 5 rows; two rows because the filter had displayed only 2 rows).
Note: when I debug during tab selection, I can see that the internal table bound to table1 has 5 rows, not 2.
My modify code is the following:
IF lv_a_sel_tab = 'TAB1'.
" Get reference of the table view element
l_table ?= view->get_element( 'TABLE1' ).
" Get reference to the Filter & Sorting API
wd_this->table_method_hndl ?= l_table->_method_handler.
" Set or cancel the table's filter action
IF lv_a_checked = abap_true.
l_table->set_on_filter( 'DO_FILTER' ).
ELSE.
l_table->set_on_filter( ' ' ).
ENDIF.
ELSEIF lv_a_sel_tab = 'TAB2'.
" Get reference of the table view element
l_table ?= view->get_element( 'TABLE2' ).
" Get reference to the Filter & Sorting API
wd_this->table_method_hndl ?= l_table->_method_handler.
" Set or cancel the table's filter action
IF lv_a_checked = abap_true.
l_table->set_on_filter( 'DO_FILTER' ).
ELSE.
l_table->set_on_filter( ' ' ).
ENDIF.
ENDIF.
Please tell me how to reactivate the filtered-out rows, so that all rows are displayed again when the tab is selected.
Regards,
Delphi.
-
Interactive Report - Saved Filter
I have an Interactive Report which includes a column containing the email address of an employee, and I would like to create a saved filter on the report that filters this column against the currently logged-in user's username.
Owner = APEX_CUSTOM_AUTH.GET_USERNAME
I'm not sure of the syntax to reference the user id of the active user in a saved interactive report filter. I have done something similar with classic reports by storing the user name in a page item and referencing the page item in the SQL. This is a little different with an Interactive Report, as I want to provide a series of saved filters users can pick from to filter the data in various ways, including seeing only 'their' data.
Jason
Jason,
You could do this in the query:
SELECT ...
CASE WHEN UPPER(your_table.username) = :APP_USER THEN 'Y' ELSE 'N' END user_match
Then put a filter on user_match = 'Y'
regards,
Malcolm.
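Malcolm's user_match trick can be tried outside APEX; a minimal SQLite sketch (the :APP_USER bind is simulated with a parameter, and the table and column names are invented for illustration):

```python
import sqlite3

# Invented table: each row carries the username it belongs to.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tickets (id INTEGER, username TEXT);
INSERT INTO tickets VALUES (1, 'jsmith'), (2, 'mdoe'), (3, 'JSMITH');
""")

app_user = "JSMITH"  # what :APP_USER would hold in APEX

# Compute the user_match column in the query, exactly as in the answer.
rows = con.execute("""
SELECT id,
       CASE WHEN UPPER(username) = ? THEN 'Y' ELSE 'N' END AS user_match
FROM tickets
ORDER BY id
""", (app_user,)).fetchall()

# The saved filter is then simply: user_match = 'Y'.
mine = [r for r in rows if r[1] == "Y"]
print(mine)  # [(1, 'Y'), (3, 'Y')]
```

Because user_match is an ordinary report column, end users can combine it with any other saved filter without knowing about :APP_USER.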