Row combination fox
Hello Friends,
Could you suggest FOX code for the following?
1. Read all combinations of characteristics B and C for each value of characteristic A, where the relationship is like:
A    B     C
     B1    C1
           C2
2. If key figure value for these combinations is certain amount, then I need to perform an action.
Thanks and regards
Sanjay Patel
Hi,
Mark all 3 chars A, B and C as 'to be changed'.
Code:
data charA type A.
data charB type B.
data charC type C.
foreach charA, charB, charC.
  if charB = _______ and charC = ________.
    < ACTION TO BE PERFORMED >
  endif.
endfor.
Hope this helps.
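For requirement 2 (act only when the key figure reaches a certain amount), the loop-and-test pattern looks like the following sketch outside FOX. This is Python for illustration only; the combinations, values and threshold are all invented, not taken from the question.

```python
# Hypothetical key figure values keyed by (A, B, C) combinations.
data = {
    ("A1", "B1", "C1"): 150.0,
    ("A1", "B1", "C2"): 80.0,
    ("A2", "B2", "C1"): 300.0,
}

THRESHOLD = 100.0  # stands in for the "certain amount" in the question

def perform_action(a, b, c, value):
    print(f"action for {a}/{b}/{c}: key figure = {value}")

# Loop over every stored A/B/C combination and act where the value qualifies.
for (a, b, c), value in data.items():
    if value >= THRESHOLD:
        perform_action(a, b, c, value)
```

In FOX the same test would sit inside the FOREACH, reading the key figure cell for the current characteristic combination.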
Similar Messages
-
Front Row combining iPhoto albums when it shouldn't
So my iPhoto library is organised with plenty of albums, smart albums and folder trees.
I have a folder tree for pictures of my art and another for pictures of other people's art.
When I have iPhoto display a slideshow there's no problems (so far). Yet when I have Front Row display the 'Lego' album in my friend Lucy's art folder, it displays every image in every album titled 'Lego', regardless of which folder tree those albums live in.
I believe I have Front Row 1.3.1; I'm running Tiger and haven't declined any updates. I've tried repairing permissions, to no avail.
Is the "Lego" album in Lucy's folder a "smart" album? If so, I would be curious to know what happens if you change the rules for that folder, run Front Row to test, and then change the rules back. It sounds like a smart album that isn't following the settings correctly.
-Doug -
Combine 2 rows into single row?
I have a table A which has information related to a process. For process completion there are 2 rows. One row has the total elapsed time and the begin and end times of the entire (multi-part) process, but the columns related to rows processed are blank. The other related row has its own start, end and elapsed times, which I don't want, but it has the row counts that I do want.
I want to take these 2 rows, combine the relevant information into 1 row and insert that row into table B.
I know I could insert from the first row and then come back and update it from the second row, but I hate having to read table A twice. Any suggestions?
Hello,
Is it not just a matter of using GROUP BY with SUM? I may well have missed an important detail, but here's a starting point:
SQL> CREATE TABLE DT_TEST_PROCESS
2 ( id number,
3 stage number,
4 rows_processed number,
5 elapsed number
6 )
7 /
Table created.
SQL>
SQL> INSERT INTO dt_test_process
2 VALUES(1,1,100,0)
3 /
1 row created.
SQL> INSERT INTO dt_test_process
2 VALUES(1,2,0,10)
3 /
1 row created.
SQL> INSERT INTO dt_test_process
2 VALUES(2,1,1000,0)
3 /
1 row created.
SQL> INSERT INTO dt_test_process
2 VALUES(2,2,0,20)
3 /
1 row created.
SQL>
SQL> INSERT INTO dt_test_process
2 VALUES(3,1,500,0)
3 /
1 row created.
SQL> INSERT INTO dt_test_process
2 VALUES(3,2,0,30)
3 /
1 row created.
SQL>
SQL> SELECT
2 id,
3 SUM(rows_processed) total_rows,
4 SUM(elapsed) total_elapsed
5 FROM
6 dt_test_process
7 GROUP BY
8 id
9 /
ID TOTAL_ROWS TOTAL_ELAPSED
1 100 10
2 1000 20
3 500 30
SQL>
SQL> CREATE TABLE dt_test_process_sum AS
2 SELECT
3 id,
4 SUM(rows_processed) total_rows,
5 SUM(elapsed) total_elapsed
6 FROM
7 dt_test_process
8 GROUP BY
9 id
10 /
Table created.
HTH
David -
Pagination disappears after setting 'max row count'
I have a report with app. 4500 records (with the default of 15 lines per page) and a pagination scheme of 'Row Ranges 1-15 16-30 in select list (with pagination)'.
When running the report, everything works fine and I get '.. of more than 500' in the select list.
Then I entered some values in the 'Max Row Count' field:
a value of 2000: it works as expected
a value of 4000 (or more): the pagination disappears and won't be displayed :-(
Any ideas what I have done wrong?
Thanks
Rainer
Hi,
This problem (select list disappearing when > 32K) still persists in version 3.
A neat workaround is to use a page item to set the number of rows displayed.
Where there is a large number of rows in the table and the max rows is also large, simply set the page item value for the number of rows to a larger value, say 200; if the select list then becomes less than 32K, it will be shown.
It's a matter of balancing the number of rows shown on the page with the total number of rows in the table / max rows combination.
It's not perfect, but it works.
Hope this helps.
Mike Mac
Message was edited by: Mike Mac
-
BOXIR2: Use of Minus Combined Query - DeskI vs. WebI
We have a BOXIR2 DeskI report that is a very simple combined query using MINUS. Creating the exact same combined query in WebI displays a significantly higher row count in the report table. However, both report formats bring back the exact same number of rows into the cube. It is as if WebI is not evaluating the MINUS at all.
Anyone have issues with MINUS combined queries when creating WebI docs vs. DeskI?
Edited by: Kevin Smolkowicz on Feb 11, 2009 8:55 PM
Thanks Sarbhjeet!
However, I am trying to get my head around how the two report formats handle the data.
In DeskI, does the combined query bring all the data back from the database and then perform the MINUS comparison at the application level? I think this based on the following:
DESKI
Combined Query: Returns back 38,457 rows to the data cube
Combined Query #1 (run by itself): Returns 37,927 rows
Combined Query #2 (run by itself): Returns 530 rows
The report displays the 13 distinct rows that satisfy the MINUS rules
WEBI: With 'Retrieve duplicate rows' unchecked
Combined Query: Returns only the 107 rows that satisfy the MINUS rules and correctly displays the 13 distinct rows in the report
WEBI: With 'Retrieve duplicate rows' checked
Combined Query: Returns back 38,457 rows to the data cube
Combined Query #1 (run by itself): Returns 37,927 rows
Combined Query #2 (run by itself): Returns 530 rows
The report displays the distinct rows of the 38,457, some of which do NOT satisfy the MINUS. It is as if the MINUS is not being evaluated at all.
Anyway, the resolution worked...just trying to understand the mechanics behind it a bit.
Thanks again. -
How do I use the context of the column as part of my calculated measure/column?
Hello,
I've struggled with this for a week now.
I need my calculation to take the context of the column that it's in and use that value as part of a lookup/filter with which to do an operation.
I basically want to add up the "Value" of all the "ORUs" associated to a manager (MGR). But since the ORUs are spread across many managers, I need to calculate the Value for an "ORU" and multiply by the "Percent" (of people who belong to that ORU for that MGR). The difficulty is that the MGR (and associated Percent) is in a table which is connected via the ORU. There's no connection to the MGR except by the column context. So, how do I get to it for the calculation?
I'd like: CALCULATE(SUM(ORUBUMGR[Percent]), FILTER(ORUBUMGR, ORUBUMGR[MGR]={This column's header value!}))
Pseudo-code would be:
For each cell in the pivot table:
  SUM over each ORU:
    Percent = look up the multiplier for that ORU associated with the column
    SUMofValue = add all of the Values associated with that column/row combination
    Multiply Percent * SUMofValue
What won't work is: CALCULATE(SUM(ORUBUMGR[Percent]), ORUMAP)*CALCULATE(SUM(Charges[Value]), ORUMAP)
because you're doing a sum of all the Percentages instead of the sum of the Percentages which are only associated with MGR (i.e., the column context)
The alternatives I can think of are: doing a cross join between the tables so you end up with a copy of "Charges" for every row in ORUBUMGR, but while that would work for the example, it won't scale, as my actual Charges table is quite large, and it would end up quite complicated. Or having a column for each MGR where the lookup to the Percent is hardcoded into the calculation, but that also seems complicated and won't scale when I want to make future changes.
I'd appreciate any assistance...even a "that's impossible".
Kind Regards,
Alex
Link to the XLS on onedrive
_https://onedrive.live.com/redir?resid=67E698690736D027!3358&authkey=!ADU5F41Sg08Ygm0&ithint=file%2cxlsx
Link to image showing data and what the pivot answer should look like _http://i.imgur.com/2HC7Lk7.png
Link to image showing the data model _http://i.imgur.com/KK3b61v.png
(I'm waiting to be verified..)
Yes, you're right, my first calculation attempt only works if the ORU and MGR are on the rows. :(
I came up with a 2 step calc which is similar to the final calc posted by ImkeF although maybe a little cleaner. It uses a many to many pattern to calculate the ChargeAmt and then a SUMX to iterate over the ORUBUMGR table and apply the percentages.
ChargeAmt:=CALCULATE(SUM(Charges[Value]),ORUMAP)
Amt2:=SUMX(ORUBUMGR, ORUBUMGR[Percent] * [ChargeAmt])
You can roll these into a single measure; you just have to wrap the expression from [ChargeAmt] in another CALCULATE so that it does both the many-to-many calc and the transition from filter context to row context:
Amt3:=SUMX(ORUBUMGR, ORUBUMGR[Percent] *CALCULATE(CALCULATE(SUM(Charges[Value]),ORUMAP)))
Personally I think I'd go with the two measure approach (even if you end up hiding [ChargeAmt]) as I think it's easier to read and will be easier to maintain.
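For intuition only, the SUMX pattern above boils down to a weighted sum over the bridge table. Here is a Python sketch of that arithmetic; the ORU/MGR percentages and charge totals are invented, not the workbook's data.

```python
# Hypothetical bridge table ORUBUMGR: (MGR, ORU) -> Percent.
orubumgr = {
    ("MgrA", "ORU1"): 0.6,
    ("MgrB", "ORU1"): 0.4,
    ("MgrA", "ORU2"): 1.0,
}

# Hypothetical charge totals per ORU
# (the CALCULATE(SUM(Charges[Value]), ORUMAP) side of the measure).
charge_amt = {"ORU1": 1000.0, "ORU2": 500.0}

def amt(mgr):
    """SUMX over this manager's bridge rows: Percent * ChargeAmt."""
    return sum(pct * charge_amt[oru]
               for (m, oru), pct in orubumgr.items()
               if m == mgr)

print(amt("MgrA"))  # 0.6*1000 + 1.0*500 = 1100.0
```

The column context in the pivot plays the role of the `mgr` argument here: each column restricts the bridge rows before the weighted sum runs.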
http://darren.gosbell.com - please mark correct answers -
How to lookup a value in a 2D array from an Excel file - Dasylab version 12 Pro
Hi, I am new to this forum and am looking for some advice on a worksheet I'm trying to construct. I have an Excel spreadsheet that is basically a 2D array, and I want to use Dasylab v12 Pro to be able to import a value from this array based on user selections from within the Dasylab worksheet.
Column A lists 200+ diesel engine models. I've shown 9 in my Excel attachment, one model per row (shaded in yellow), and each engine model can be run at several speeds (shaded green in columns B thru F). An engine in a given row, combined with a chosen operating speed, gives you a corresponding horsepower (blue shading).
I have this Excel sheet saved somewhere on my C:/ drive, and what I want to do is: when the user starts the Dasylab worksheet, he will select from a drop-down menu to choose the engine model (ie A, B, C, etc) and another drop-down menu to choose the speed (ie 1470, 1760, etc). I know that I can make a drop-down menu with a Coded Switch within Dasylab; however, it seems only 16 choices can be made from each switch, so for my 200 engine models I would need 13 switches! I know I can assign a text description like "Engine A" to a numerical value within that coded switch. Somehow I need to take those two selections made within the Dasylab experiment and read this Excel file (ie my database of all 200 diesel engine models) as a 2-dimensional array by row and column to spit the data value (the blue numbers) back into Dasylab.
The goal is to take the engine model, speed, and the horsepower obtained from the array search and write these to an .asc file that I will create a running log of this data. So, after the test page is run 50 times it will have 50 rows of data containing these 3 parameters. There is some other test data taken from my data acquisition that goes along with this, however that's not part of my 2D array predicament.
I'm taking a guess that I need to do something with global strings & variables, and somehow import and export them with an ODBC in/out module. This is something I've just never worked with before, so I am a bit lost. Obviously I could just make the user type in the engine model and speed as startup parameters at the start of the test and save them to a variable or string, but I want to make it idiot-proof so that the two selections (ie row and column) can be chosen from a pre-set list and will yield my data value. Everything else related to Dasylab I am pretty proficient with.
Thanks,
Mike
Attachments:
engine 2D array.xlsx 10 KB
This would be the best way.
Also, with version 13 they started using Python to create custom modules that can be programmed in DASYLab.
We are learning this right now, and I know that you can use standard message dialogs with it as well.
I would suggest you download a demo of V13 and take a look at the Python module.
Also, DASYLab system integrators like us can usually provide services on things like this, including Excel programming for pre- and post-analysis.
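As a rough sketch of what that model/speed lookup might look like in the v13 Python module (or any pre-processing script): the engine names, speeds and horsepower figures below are invented, not taken from the attachment.

```python
# Hypothetical 2D table: horsepower[model][speed], as loaded from the Excel sheet.
horsepower = {
    "Engine A": {1470: 250, 1760: 300},
    "Engine B": {1470: 400, 1760: 480},
}

def lookup_hp(model, speed):
    """Return the horsepower for a model/speed pair, or None if not listed."""
    return horsepower.get(model, {}).get(speed)

# The three values the post wants to log per test run.
model, speed = "Engine A", 1760
print(model, speed, lookup_hp(model, speed))  # Engine A 1760 300
```

The two drop-down selections map onto the two dictionary keys, which sidesteps the 16-choice Coded Switch limit entirely.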
Tom Rizzo
InSyS Corp.
www.insyscorp.com
Your DASYLab integrator -
Oem explain plan produced doesn't correspond to explain plan with tkprof
Hello all,
I am running OEM on a 10.2.0.3 database and I have a very slow query with cognos 8 BI on my data warehouse.
I went to the dbconsole through OEM, connected to the target database, looked at the query execution, and got an explain plan.
Then I did a trace file and ran it through tkprof.
When I look at both produced explain plans, I can see the trees look the same except for the corresponding values. In OEM it says I am going through 18,000 rows, and in the tkprof result it says more like millions of records.
Has anybody had these kinds of results?
Should I have more confidence in tkprof than in OEM?
It is very important to me since I am being challenged by an external DBA.
I would recommend you get Christian Antognini's book "Troubleshooting Oracle Performance" (http://www.antognini.ch/top/), which explains everything you need to know when analyzing Oracle SQL performance and explain plans.
If you properly trace your SQL statement, you will get "STAT" lines in the trace file. These STAT lines show you the actual number of rows processed per row source operation. Explain plan by default only shows you the "estimated" row counts for the row source operations, no matter whether you use "explain=" in the tkprof call or OEM to get the explain plan. Tkprof reads the STAT lines from the trace and outputs a section which is similar to an execution plan but contains the "real" number of rows.
However, if you want to troubleshoot Oracle performance, I would recommend running the statement with the hint /*+ GATHER_PLAN_STATISTICS */ or setting statistics_level=ALL in your session (not database-wide!).
If you do, you can get an excellent execution plan with DBMS_XPLAN.DISPLAY_CURSOR containing both estimated rows as well as actual rows combined with the "number of starts" for e.g. nested loop joins.
Get the book, read it, and you will be able to discuss this issue with your external dba in a professional way. ;-)
Regards,
Martin
www.ora-solutions.net -
Why oracle spatial query execute so slow???
hi all,
I have two Oracle Spatial tables, CHI_2007R2 and CHI_2008R2. Each table has its own spatial index, and each table has 2000 rows. When I execute this query, I get the result quickly:
select /*+ ORDERED */ a.link_id from chi_2007r2 a,chi_2008r2 b where a.link_id=b.link_id and sdo_relate(a.geom,b.geom,'mask=INSIDE querytype=WINDOW')='TRUE';
But when I execute the query by geom only, it takes a very long time (more than 3 hours):
select /*+ ORDERED */ a.link_id,b.link_id from chi_2007r2 a,chi_2008r2 b where sdo_relate(a.geom,b.geom,'mask=INSIDE querytype=WINDOW')='TRUE';
I don't understand...
thanks
David
Because in the first statement
select /*+ ORDERED */ a.link_id from chi_2007r2 a,chi_2008r2 b where a.link_id=b.link_id and sdo_relate(a.geom,b.geom,'mask=INSIDE querytype=WINDOW')='TRUE';
you are joining the two tables, while in the second statement
select /*+ ORDERED */ a.link_id,b.link_id from chi_2007r2 a,chi_2008r2 b where sdo_relate(a.geom,b.geom,'mask=INSIDE querytype=WINDOW')='TRUE';
you are doing a cartesian merge first, because there is no join between a and b other than the sdo_relate, which will be calculated for every row combination you get.
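A rough count of candidate pairs shows why the second query is so much slower; with 2000 rows per table, as in the post:

```python
rows_a = rows_b = 2000

# With the a.link_id = b.link_id join, sdo_relate runs roughly once per
# matching pair: about 2000 evaluations (assuming link_id is unique in
# both tables -- an assumption, not stated in the post).
joined_pairs = rows_a

# Without the join, every row of a is paired with every row of b, so
# sdo_relate must be evaluated for every row combination.
cartesian_pairs = rows_a * rows_b

print(joined_pairs, cartesian_pairs)  # 2000 4000000
```

That is a 2000x blow-up in sdo_relate evaluations, which lines up with minutes turning into hours.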
But I think you'd be better off posting in the {forum:id=76} forum.
Best regards,
PP
Edited by: porzer on Jan 15, 2009 10:34 AM -
A little help writing an SQL please? May need some advanced functionality..
Hi All
I have a requirement to write an SQL query that, given a total and some rows from a table, can return the rows where one of the columns adds up to the total.
These are transactions, so I'll boil it down to:
TranID, TranAmount
1, 100
2, 200
3, 250
4, 50
Suppose I know that I need to seek 2 transactions totalling 300.
Possible combinations are IDs 1+2 or 3+4. In this case it is an error, because I cannot find a single set of transactions that satisfies both the number-of and sum-of requirements.
Suppose I know that I need to seek 3 transactions totalling 550.
The only combination is IDs 1+2+3.
The difficulty for me here comes in that the number of transactions hunted will change, possibly without limits but as there will be a factorial element in terms of number of combinations, imposing an upper limit would be sensible.
I considered one solution within my understanding of SQL: supposing I enforce a limit of 3 (i.e. I cannot look for a combination of more than 3 transactions), I can cross join the rows to themselves on condition that no tran ID equals any other tran ID, which gives a huge block of cartesian-joined transactions. I can then use a WHERE clause to pick out row combinations having the correct sum. If I can use some kind of ID > otherID in my join, I can make the cartesian triangle-shaped, which hopefully would prevent essentially duplicated rows of 1,2,3 and 1,3,2 and 3,2,1 and 3,1,2 etc.
If I was only looking for 2 or 1 possible combinations, I would replace the tran amounts with 0 using a CASE WHEN (because the number of times I'll cross join is fixed; I don't want to get into executing dynamic SQL).
Lastly, I should point out that I'm doing this in PL/SQL or possibly Java, so I have the ability to introduce custom logic if need be.
I would love to hear any other input from the wise and wizened members here as to how they might approach this problem.. Maybe oracle has some cool analytical functionality that I do not know of that will help here?
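Since custom logic in PL/SQL or Java is on the table, one brute-force approach is to enumerate fixed-size combinations directly. A Python sketch using the sample transactions from the post; itertools.combinations plays the role of the triangle-shaped cross join (ID > otherID), so no duplicated orderings appear:

```python
from itertools import combinations

# Sample transactions from the post: (TranID, TranAmount).
trans = [(1, 100), (2, 200), (3, 250), (4, 50)]

def find_sets(target, n):
    """Return every n-transaction combination whose amounts sum to target."""
    return [tuple(tid for tid, _ in combo)
            for combo in combinations(trans, n)
            if sum(amt for _, amt in combo) == target]

print(find_sets(300, 2))  # [(1, 2), (3, 4)] -> two answers, an error per the post
print(find_sets(550, 3))  # [(1, 2, 3)] -> the unique answer
```

The cost is C(rows, n) sums, which is why capping n, as the post suggests, is sensible.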
Thanks in advance, guys and girls!
>
Now, as I'll be looking to update another columns on these transactions so that it can be picked up by an external process, can I modify this query so that it produces a list of TranIDs so I can say:
UPDATE tran SET falg='Y' WHERE tranID IN (<the query>)
>
Well if there's a workaround for the NOCYCLE, then the following will give all the transactions as rows for a particular total...
SQL> ed
Wrote file afiedt.buf
1 with t as (select 1 as TranID, 100 as TranAmount from dual union all
2 select 2, 200 from dual union all
3 select 3, 250 from dual union all
4 select 4, 50 from dual)
5 -- end of test data
6 select distinct substr( trans
7 , decode( level, 1, 1, instr(trans,'+',1,level-1)+1)
8 , decode( instr(trans,'+',1,level), 0, length(trans), instr(trans,'+',1,level) - decode( level, 1, 0, instr(trans,'+',1,level-1))-1)
9 ) the_value
10 from (select trans
11 from (
12 select ltrim(rtrim(comb.trans,'+'),'+') as trans, sum(case when instr(comb.trans,'+'||t.tranid||'+')>0 then tranamount else null end) as sum_tran
13 from (select sys_connect_by_path(tranid, '+')||'+' as trans
14 from t
15 connect by nocycle tranid > prior tranid) comb
16 ,t
17 group by comb.trans
18 )
19 where sum_tran = 550)
20 connect by level < length(replace(translate(trans,'01234567890','00000000000'),'0')) + 2
21* order by 1
SQL> /
THE_VALUE
1
2
3
SQL> ed
Wrote file afiedt.buf
1 with t as (select 1 as TranID, 100 as TranAmount from dual union all
2 select 2, 200 from dual union all
3 select 3, 250 from dual union all
4 select 4, 50 from dual)
5 -- end of test data
6 select distinct substr( trans
7 , decode( level, 1, 1, instr(trans,'+',1,level-1)+1)
8 , decode( instr(trans,'+',1,level), 0, length(trans), instr(trans,'+',1,level) - decode( level, 1, 0, instr(trans,'+',1,level-1))-1)
9 ) the_value
10 from (select trans
11 from (
12 select ltrim(rtrim(comb.trans,'+'),'+') as trans, sum(case when instr(comb.trans,'+'||t.tranid||'+')>0 then tranamount else null end) as sum_tran
13 from (select sys_connect_by_path(tranid, '+')||'+' as trans
14 from t
15 connect by nocycle tranid > prior tranid) comb
16 ,t
17 group by comb.trans
18 )
19 where sum_tran = 300)
20 connect by level < length(replace(translate(trans,'01234567890','00000000000'),'0')) + 2
21* order by 1
SQL> /
THE_VALUE
1
2
3
4
SQL> -
AT end of... some issues in using them
Hi ALL,
I have a requirement where I have to calculate the number of empty bins for each combination of the 3rd and 4th columns, and each combination will be moved to a flat file.
So I tried to use AT END OF (field), as below:
loop at it_lagp into wa_lagp.
move wa_lagp-lgnum to wa_final-lgnum.
* timestamp
move wa_lagp-lgber to wa_final-lgber.
move wa_lagp-lptyp to wa_final-lptyp.
if wa_lagp-lgber is not INITIAL and wa_lagp-lptyp is INITIAL.
at end of lgber.
cnt = cnt + 1.
move cnt to wa_final-count.
append wa_final to it_final.
clear : cnt.
endat.
ELSEIF wa_lagp-lgber is INITIAL and wa_lagp-lptyp is not INITIAL.
at end of lptyp.
cnt = cnt + 1.
move cnt to wa_final-count.
append wa_final to it_final.
clear : cnt.
endat.
else.
at end of lgber.
at end of lptyp.
cnt = cnt + 1.
move cnt to wa_final-count.
append wa_final to it_final.
clear : cnt.
endat.
endat.
endif.
endloop.
The problem is with the bolded ones: as we see, the 3rd column has 0001, which will not end, so the statement is not working for me. I want it to consider both the 3rd (lgber) and 4th (lptyp) columns, and count.
1 5051
2 5051 0001
3 5051 0001
4 5051 0001
5 5051 0001
6 5051 0001 E060
7 5051 0001 E075
8 5051 0001 E075
9 5051 0001 E090
10 5051 0001 E090
11 5051 0001 E090
12 5051 0001 E090
13 5051 0001 E090
14 5051 0001 E090
Hi,
Write AT END on the last field ...
Ex: ITAB is an internal table with the given data ...
itab-a = '5051'.
append itab.
clear itab.
itab-a = '5051'.
itab-b = '0001'.
append itab.
append itab.
append itab.
append itab.
clear itab.
itab-a = '5051'.
itab-b = '0001'.
itab-c = 'E060'.
append itab.
clear itab.
itab-a = '5051'.
itab-b = '0001'.
itab-c = 'E075'.
append itab.
append itab.
clear itab.
itab-a = '5051'.
itab-b = '0001'.
itab-c = 'E090'.
append itab.
append itab.
append itab.
append itab.
append itab.
append itab.
clear itab.
*loop thru itab as ....
loop at itab.
v_count = v_count + 1.
at end of c.
read table itab index sy-tabix.
itab-d = v_count.
modify itab index sy-tabix. " pass the values to another internal table
clear v_count.
endat.
endloop.
loop at itab.
write :/1 itab-a,
10 itab-b,
20 itab-c,
30 itab-d.
endloop.
output from the above:
5051 1
5051 0001
5051 0001
5051 0001
5051 0001 4
5051 0001 E060 1
5051 0001 E075
5051 0001 E075 2
5051 0001 E090
5051 0001 E090
5051 0001 E090
5051 0001 E090
5051 0001 E090
5051 0001 E090 6
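The same "emit a count when the last grouping field changes" idea, sketched in Python with itertools.groupby; the rows are shaped like the post's lgnum/lgber/lptyp data but shortened, so the counts differ from the output above.

```python
from itertools import groupby

# Rows as (lgnum, lgber, lptyp); empty strings are blank fields.
rows = [
    ("5051", "", ""),
    ("5051", "0001", ""),
    ("5051", "0001", ""),
    ("5051", "0001", "E060"),
    ("5051", "0001", "E075"),
    ("5051", "0001", "E075"),
    ("5051", "0001", "E090"),
    ("5051", "0001", "E090"),
]

# Grouping on the full (lgber, lptyp) combination is the analogue of
# AT END OF on the last field: one count per distinct combination.
counts = [(key, len(list(grp)))
          for key, grp in groupby(rows, key=lambda r: (r[1], r[2]))]
for (lgber, lptyp), n in counts:
    print(repr(lgber), repr(lptyp), n)
```

Like AT END OF, groupby only closes a group when consecutive rows change, so the table must already be sorted on the grouping fields.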
Regards,
Srini. -
Algorithm to count frequencies
I have a flat file of data, where rows are records and columns are variables. The variables are X1, X2, ..., XN. I need a data structure and/or algorithm to count the frequencies of this data. For example, I need to query how many times X1=true and X2=false, or X3=false and X2=false and XN=true, etc. I could easily put this data into a database (which I have). However, I find (using the profiler in Eclipse TPTP) that database calls are the most time-consuming calls. Is there a data structure and/or algorithm to handle frequency counts?
I know that the .NET framework has a DataTable data structure, and it can be used like an in-memory database where one can filter quickly. Is there a corresponding data structure/object in Java, or in any Java projects anywhere?
Thanks.
jakester wrote:
If this is not what you had in mind, a better explanation would be in order.
The format of my data is easy to understand. Let's say you have N categorical variables, denoted X1, X2, ..., XN. If N=3 and the variables are binary (each variable has 2 values), then the data set looks like the following:
X1, X2, X3
true, false, true
true, false, false
false, false, false
So it's a CSV (comma-separated values) file.
Now, I can load this into a table in a database and filter:
1. select count(*) from theTable where x1='true'
2. select count(*) from theTable where x1='true' and x2='true'
In fact, this is what I do currently: I parse the CSV file and load it into a database. To get the frequencies of the combinations of values, I simply issue SELECT statements. This solution works, but is not optimal. What I am finding is that the database calls take up the majority of the time, leading to poor performance (even when the database is local, not on a network; tested with Oracle and MySQL). I am wondering: is there a better way to achieve this type of frequency count?
Please note that the categorical variables may not all be binary. If they are all binary, there are up to 2^N unique combinations of values to keep track of. In general, each categorical variable has 2 or more values. For example, we can have N=3 with the values of
X1 : {val1, val2, val3},
X2 : {valR, valS, valT}, and
X3 : {valU, valV, valZ, valA}.
Please note that I am not querying for just one variable-value frequency (i.e. X1=val1) or dealing with just boolean (true/false) categorical variables (i.e. X1=false). I have to query for combinations of values. For example, the following are types of filters that I may need:
1. X1=val1 and X2=valR and X3=valA
2. X1=val1 and X3=valV
3. X1=val3
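One in-memory alternative to repeated SELECT COUNT(*) round-trips: count each distinct value tuple once, then sum the counts of the tuples matching a filter. A Python sketch; the rows are a made-up sample in the post's format.

```python
from collections import Counter

NAMES = ("X1", "X2", "X3")  # column order from the CSV header

# Parsed data rows -- a made-up sample in the post's format.
rows = [
    ("val1", "valR", "valV"),
    ("val1", "valR", "valA"),
    ("val2", "valS", "valA"),
    ("val1", "valS", "valV"),
]

freq = Counter(rows)  # one count per distinct combination of values

def count(**constraints):
    """Frequency of rows matching constraints such as X1='val1', X3='valV'."""
    return sum(n for key, n in freq.items()
               if all(key[NAMES.index(var)] == val
                      for var, val in constraints.items()))

print(count(X1="val1"))             # 3
print(count(X1="val1", X3="valV"))  # 2
```

Each query scans only the distinct combinations (at most the product of the variables' cardinalities), not the raw records, and there is no database round-trip at all.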
Thanks.
Okay, I understand.
You don't necessarily need an actual B(+)-tree class. You can mimic the characteristics of such a structure using some nested Maps and a Set:
Map<String, Map<String, Set<Combination>>>
// | | |
// | | +--------> the combinations (eg: ['val1', 'valT', 'valV'])
// | |
// | +-----------------------> the value (eg: 'val1')
// |
// +------------------------------------> the variable (eg: 'X1')
Given the following data file (data.txt):
X1 , X2 , X3
val1, valR, valV
val2, valS, valA
val1, valR, valV
val1, valS, valZ
val1, valR, valA
val2, valS, valA
val1, valT, valZ
val1, valR, valA
val3, valR, valA
val3, valR, valU
val1, valT, valZ
val1, valR, valV
val2, valS, valA
val2, valT, valA
val1, valT, valV
val3, valS, valU
val2, valS, valA
val1, valT, valZ
val2, valS, valA
Here's even a quick test harness:
public class Main {
    public static void main(String[] args) throws Exception {
        DataFile dataFile = new DataFile("data.txt");
        System.out.println(dataFile);
        // X1=val1 and X2=valR and X3=valA
        for (Combination c : dataFile.getCombinations("X1=val1", "X2=valR", "X3=valA")) {
            System.out.println(c);
        }
        System.out.println();
        // X1=val1 and X3=valV
        for (Combination c : dataFile.getCombinations("X1=val1", "X3=valV")) {
            System.out.println(c);
        }
        System.out.println();
        // X1=val3
        for (Combination c : dataFile.getCombinations("X1=val3")) {
            System.out.println(c);
        }
        System.out.println();
    }
}
class DataFile {
    private String[] variableNames;
    private Map<String, Map<String, Set<Combination>>> dataMap;

    DataFile(String fileName) throws FileNotFoundException {
        dataMap = new HashMap<String, Map<String, Set<Combination>>>();
        read(fileName);
    }

    private void associate(String variable, String value, Combination comb) {
        Map<String, Set<Combination>> variableMap = dataMap.remove(variable);
        Set<Combination> combinations = variableMap.remove(value);
        if (combinations == null) combinations = new HashSet<Combination>();
        combinations.add(comb);
        variableMap.put(value, combinations);
        dataMap.put(variable, variableMap);
    }

    Set<Combination> getCombinations(String... constraints) {
        Set<Combination> combinations = new HashSet<Combination>();
        boolean firstRun = true;
        for (String constraint : constraints) {
            String[] keyValue = constraint.split("=");
            String variable = keyValue[0];
            String value = keyValue[1];
            Set<Combination> temp = dataMap.get(variable).get(value);
            if (firstRun) {
                firstRun = false;
                combinations.addAll(temp);
            } else {
                combinations.retainAll(temp);
            }
        }
        return combinations;
    }

    private void read(String fileName) throws FileNotFoundException {
        Scanner file = new Scanner(new File(fileName));
        variableNames = file.nextLine().trim().split("\\s*+,\\s*+");
        for (String varName : variableNames) {
            dataMap.put(varName, new HashMap<String, Set<Combination>>());
        }
        int row = 2;
        while (file.hasNextLine()) {
            String[] line = file.nextLine().trim().split("\\s*+,\\s*+");
            Combination comb = new Combination(line, row);
            for (int i = 0; i < line.length; i++) {
                associate(variableNames[i], line[i], comb);
            }
            row++;
        }
    }

    @Override
    public String toString() {
        StringBuilder b = new StringBuilder("DataFile=[\n");
        for (String var : dataMap.keySet()) {
            b.append("  " + var + "\n");
            Map<String, Set<Combination>> values = dataMap.get(var);
            for (String val : values.keySet()) {
                b.append("    " + val + "\n");
                Set<Combination> combinations = values.get(val);
                for (Combination comb : combinations) {
                    b.append("      " + comb + "\n");
                }
            }
        }
        return b.append("]").toString();
    }
}

class Combination {
    private final String[] values;
    private final int row;

    Combination(String[] v, int r) {
        values = v;
        row = r;
    }

    @Override
    public boolean equals(Object o) {
        if (o == null || this.getClass() != o.getClass()) return false;
        Combination that = (Combination) o;
        return this.row == that.row;
    }

    @Override
    public int hashCode() {
        return row;
    }

    @Override
    public String toString() {
        return String.format("row=%d, values=%s", row, Arrays.toString(values));
    }
}
As you can see, after building the data file, the two main-query operations are two O(1)'s:
// in DataFile#getCombinations(String...)
Set<Combination> temp = dataMap.get(variable).get(value);
The code above should of course be properly tested; getCombinations(...) should be refactored so that it doesn't rely on split(...) to get the variable and value information, etc.
Good luck! -
Repeat Column Value derived from formulae
I need to repeat a value derived for a particular column & row combination to all rows in the table. However, when I put in the CASE function it does not repeat the values as I expect; the value is only shown in the 1st row.
The formula used is as follows:
CASE WHEN D_MILLS.MILL_NO =1 THEN (F_MILL_PRODUCTION.FN_YTD_CPO_PRODUCED/F_MILL_PRODUCTION.FN_YTD_CROP_PROCESSED ) ELSE (FILTER(F_MILL_PRODUCTION.FN_YTD_CPO_PRODUCED/F_MILL_PRODUCTION.FN_YTD_CROP_PROCESSED USING (D_MILLS.MILL_NO = 1.00)) ) END
Note: Mill No is the dimension field that I use for the report, and I require the value derived for Mill No = 1 to be repeated for all other mills (Mill No = 2, 3, etc.).
Edited by: Shaz01 on Aug 7, 2010 1:45 AM
Hi,
Use the formula like this,
Max(CASE WHEN D_MILLS.MILL_NO =1 THEN (F_MILL_PRODUCTION.FN_YTD_CPO_PRODUCED/F_MILL_PRODUCTION.FN_YTD_CROP_PROCESSED ) ELSE (FILTER(F_MILL_PRODUCTION.FN_YTD_CPO_PRODUCED/F_MILL_PRODUCTION.FN_YTD_CROP_PROCESSED USING (D_MILLS.MILL_NO = 1.00)) ) END)
Also, since for all rows you need to show the value of mill no. 1, there is no need for the CASE statement at all:
MAX(FILTER(F_MILL_PRODUCTION.FN_YTD_CPO_PRODUCED/F_MILL_PRODUCTION.FN_YTD_CROP_PROCESSED USING (D_MILLS.MILL_NO = 1.00)))
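What the MAX(FILTER(...)) expression is doing, in plain terms: derive the ratio from the mill no. 1 row once, then repeat that single value on every row. A Python sketch with invented production figures (not from the report):

```python
# Hypothetical figures: mill_no -> (ytd_cpo_produced, ytd_crop_processed).
mills = {1: (500.0, 2000.0), 2: (300.0, 1500.0), 3: (400.0, 1000.0)}

# Derive the ratio for mill no. 1 only...
cpo, crop = mills[1]
mill1_ratio = cpo / crop

# ...then repeat that one value against every mill row in the report.
report = [(mill_no, mill1_ratio) for mill_no in sorted(mills)]
print(report)  # [(1, 0.25), (2, 0.25), (3, 0.25)]
```

The FILTER restricts the measure to mill 1, and the MAX aggregation lets that single value survive the grouping on every other row.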
Thanks,
Vino -
Editable WD ALV - Cell Read Only
Hi,
Is there any way to make an individual cell read-only for a particular column-row combination?
My scenario is:
I have an editable WD ALV with data. Now, for one row of the table (e.g. index 4), I want to make Column2 and Column3 read-only for that particular entry only. Please guide me if there is any way to do this.
Best Regards
Sid
You will have to create an extra attribute for each of your columns, of type boolean, under the same node. Now bind these attributes to the read-only property of your columns' cell editors.
Then pass abap_true/abap_false to make the field editable/read-only.
Check this wiki for reference: [https://wiki.sdn.sap.com/wiki/display/WDABAP/How%20to%20edit%20conditionally%20row%20of%20a%20ALV%20table%20in%20Web%20Dynpro%20for%20ABAP]
Regards,
Radhika. -
Hi All,
I've a source which contains data for several measures in one table.
Now, all my measures will not have data for certain rows (i.e. all sparse row combinations, in my case). So are there any advantages to changing #missing values to 0 (zero) and loading the data? Either way, #missing or 0 would have to sit in cells in data blocks which are already created for the dense measures.
#missing means there is no data; 0 means the data is there but is zero.
It makes a difference in the compressed block.
see the below link.
http://download.oracle.com/docs/cd/E12825_01/epm.111/esb_dbag/frameset.htm?dstalloc.htm#dstalloc1011245