Matching 2 Data Sets
Hello -
I am trying to match up 2 separate sources of data. For example, let's say I have a set of wine data that is categorized by varietal, region, year, winery, etc., and let's say I get a 2nd set of wine data that is not categorized. I was thinking that setting up
some Named Entity Recognition would be a good way to do this, but I can't for the life of me figure out how to do that. Any thoughts on that approach? Suggestions on a different approach? Maybe loading up training data and then having the system predict the
different classifications based on a single input string for each wine?
Any help is much appreciated. Thank you!
Scott
Thanks for the reply, Roope! So, the second data set would be a simple single column. For example, a value might be "2012 Talbott Chardonnay Sleepy Hollow Vineyard". I can certainly parse some of that stuff out (like year), but it gets difficult
when there are tens of thousands of records and there are always slight inconsistencies (like the word "Winery" being inserted, or "Ch." instead of "Chateau", and a hundred other things).
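One low-tech approach that can get surprisingly far before reaching for NER or a trained model: flatten each categorized record into the same shape as the raw strings and score string similarity. A minimal sketch using Python's standard library (the records, field names, and 0.6 threshold are illustrative assumptions, not from your data):

```python
from difflib import SequenceMatcher

# Illustrative reference records from the categorized data set.
reference = [
    {"year": 2012, "winery": "Talbott", "varietal": "Chardonnay",
     "vineyard": "Sleepy Hollow Vineyard"},
    {"year": 2013, "winery": "Caymus", "varietal": "Cabernet Sauvignon",
     "vineyard": "Napa Valley"},
]

def canonical_string(rec):
    """Flatten a categorized record into the same shape as the raw strings."""
    return f"{rec['year']} {rec['winery']} {rec['varietal']} {rec['vineyard']}"

def best_match(raw, records, threshold=0.6):
    """Return the reference record most similar to the raw string, or None."""
    scored = [(SequenceMatcher(None, raw.lower(),
                               canonical_string(r).lower()).ratio(), r)
              for r in records]
    score, rec = max(scored, key=lambda t: t[0])
    return rec if score >= threshold else None

# Noise like an inserted "Winery" still scores high against the right record.
match = best_match("2012 Talbott Winery Chardonnay Sleepy Hollow", reference)
```

A dedicated record-linkage or fuzzy-matching library would cope better with abbreviation pairs like "Ch."/"Chateau", typically by normalizing both sides through a synonym table before scoring.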
Is this kind of like what you mean?
http://giventocode.com/build-a-recommendations-system-for-your-blog-or-web-site-using-azure-machine-learning-and-azure-mobile-services#.VHSRkovF-Xw
Or is there an article or some settings you can refer me to that might be helpful? Thanks!
Scott
Similar Messages
-
Using alternate rows with Spry Data Set
Can anyone provide more information on how to implement alternate row colors with a Spry Data Set? What I have done is:
Create spry data set using HTML page as the data set
The html page that displays the data set is: <div spry:region="equipment">
In the default.css page that is linked, I added the info in note #1 below
When the page displays, no colors appear. Tried changing the colors and still no luck. Is there another step to do?
Note #1:
within the default.css, the following are declared:
#equipment odd {
background-color: #CCC;
}
#equipment even {
background-color: #F2F2F2;
}
Note #2:
Here is a link to see the actual pages referenced above.
Any ideas? Getting frustrated!! Thanks in advance for any advice.
You are going to kick yourself, but you haven't assigned any page element the ID #equipment. The region is called that, but the table or the DIV does not have an id, so your selector matches nothing...
-
Regarding Upload of data into SAP from Application server using DATA SETS
Hi all,
I have a problem when uploading data from the application server [a .txt file] into SAP using OPEN DATASET, READ DATASET & CLOSE DATASET: it is going to a short dump. The file splits the fields using a tab delimiter.
During uploading, some junk values are coming in with '#', so it goes to a dump, giving the following type of error.
Runtime Errors CONVT_NO_NUMBER
Exception CX_SY_CONVERSION_NO_NUMBER
Unable to interpret "#0#0#0#0#0#0#0#" as a number.
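As an aside, the '#' characters in that message are usually just how SAP displays a non-printable character, here the tab delimiter, so splitting at a literal '#' finds nothing. A small Python sketch purely to illustrate the distinction (the sample line is made up to resemble the file):

```python
# A tab-delimited line as read from the file; SAP renders the tabs as '#'.
line = "0040000000\t000010\t-100\t-110"

# Splitting at the literal character '#' leaves the line in one piece:
by_hash = line.split("#")

# Splitting at the actual tab character yields the individual fields:
by_tab = line.split("\t")
```

In ABAP the usual fix is to split at cl_abap_char_utilities=>horizontal_tab instead of at '#'.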
Can anyone solve the above issue, as I need it urgently.
Thanks in advance.
Thanks & Regards,
Rayeezuddin.
Hi Hielman,
Thanks for the reply and for the effort you are putting in to solve my issue.
I had done the same thing you posted prior to your reply, but I am still getting a dump.
FORM f_get_legacy_data .
DATA: l_tab type xstring,
l_tab1(1) type c,
s type x.
move '23' to l_tab.
move l_tab to l_tab1.
OPEN DATASET v_pfile FOR INPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc <> 0.
MESSAGE text-207 TYPE c_e.
ELSE.
DO.
CLEAR wa_input1.
READ DATASET v_pfile INTO wa_read.
IF sy-subrc EQ 0.
move wa_read to i_txt-txt.
append i_txt.
ELSE.
EXIT.
ENDIF.
ENDDO.
s = '09'.
loop at i_txt.
move i_txt-txt+10(1) to l_tab1.
move '#' to l_tab1.
split i_txt-txt at '#' into wa_input1-vbeln wa_input1-posnr
wa_input1-per0bal wa_input1-per1val
wa_input1-per2val wa_input1-per3val
wa_input1-per4val wa_input1-per5val
wa_input1-per6val wa_input1-per7val
wa_input1-per8val wa_input1-per9val
wa_input1-per10val wa_input1-per11val
wa_input1-per12val.
APPEND wa_input1 TO i_input1.
CLEAR wa_input1.
endloop.
ENDIF.
CLOSE DATASET v_pfile.
IF i_input1[] IS INITIAL.
* If there is no data in the legacy file, or if the structure of the
* legacy data does not match that of the internal table, an error
* message needs to be displayed.
MESSAGE text-211 TYPE c_e.
*&--begin of change--
ELSE.
CLEAR: wa_input, wa_input1.
LOOP AT i_input1 INTO wa_input1.
MOVE wa_input1 TO wa_input.
MOVE: wa_input1-vbeln TO wa_input-vbeln,
wa_input1-posnr TO wa_input-posnr,
wa_input1-per0bal TO wa_input-per0bal,
wa_input1-per1val TO wa_input-per1val,
wa_input1-per2val TO wa_input-per2val,
wa_input1-per3val TO wa_input-per3val,
wa_input1-per4val TO wa_input-per4val,
wa_input1-per5val TO wa_input-per5val,
wa_input1-per6val TO wa_input-per6val,
wa_input1-per7val TO wa_input-per7val,
wa_input1-per8val TO wa_input-per8val,
wa_input1-per9val TO wa_input-per9val,
wa_input1-per10val TO wa_input-per10val,
wa_input1-per11val TO wa_input-per11val,
wa_input1-per12val TO wa_input-per12val.
APPEND wa_input TO i_input.
CLEAR: wa_input, wa_input1.
ENDLOOP.
ENDIF.
ENDFORM. " GET_LEGACY_DATA
When i am giving input as
Directory /pw/data/erp/D5S/fi/up
Name: Backlog1616_D1S.txt
BKCOPO1 BKSOI1 1000.00 100.00 -200.00 0 0 0 0 0 0 0
BKSOPO2 BKSOI2 2222.22 0 300 0 0 0 0 0 0 0
BKSOPO3 BKSOI3 -3000 400 0 0 0 0 0 0 0 0
BKSOPO4 4000.55 500 600 0 0 0 0 0 0 0
0040000000 000010 -100 -110 -110 0 0 -600 0 0 0 0
0040000001 000010 -110 -110 0 0 0 -610 0 0 0 0
I am getting i_input internal table populated as follows at the end of that subroutine.
After appending [APPEND wa_input TO i_input].
BKCOPO1#BK|000000| 0.00 | 0.00 | 0.00 |
BKSOPO2#BK|000000| 0.00 | 0.00 | 0.00 |
BKCOPO3#BK|000000| 0.00 | 0.00 | 0.00 |
BKCOPO4##4|000000| 0.00 | 0.00 | 0.00 |
0040000000|000000| 0.00 | 0.00 | 0.00 |
0040000001|000000| 0.00 | 0.00 | 0.00 |
And the output is showing erroneous records: 6
No entries inserted.
Can you solve this issue? -
10g: parallel pipelined table func - distributing DISTINCT data sets
Hi,
I want to distribute data records, selected from a cursor, via a parallel pipelined table function to multiple worker threads for processing and returning result records.
The tables I am selecting data from are partitioned and subpartitioned.
All tables share the same partitioning/subpartitioning schema.
Each table has a column 'Subpartition_Key', which is hashed to a physical subpartition.
E.g. the Subpartition_Key ranges from 000...999, but we have only 10 physical subpartitions.
The selection of records is done partition-wise, one partition after another (in bulks).
The parallel running worker threads select more data from other tables for their processing (2nd-level select).
Now my goal is to distribute the initial records to the worker threads in such a way that they operate on distinct subpartitions, to decouple the access to resources (for the 2nd-level select).
But I cannot just use 'parallel_enable(partition curStage1 by hash(subpartition_key))' for the distribution:
hash(subpartition_key) (hashing A) does not match the hashing B used to assign the physical subpartition for the INSERT into the tables.
Even when I remodel hashing B, calculate some SubPartNo(subpartition_key) and use that for 'parallel_enable(partition curStage1 by hash(SubPartNo))', it doesn't work.
'parallel_enable(partition curStage1 by range(SubPartNo))' doesn't help either. The load distribution is unbalanced: some worker threads get data of one subpartition, some of multiple subpartitions, some are idle.
How can I distribute the records to the worker threads according to a given subpartition schema?
[Amendment: actually the hashing for parallel_enable is counterproductive here; it would be better to have some 'parallel_enable(partition curStage1 by SubPartNo)'.]
- many thanks!
best regards,
Frank
Edited by: user8704911 on Jan 12, 2012 2:51 AM
Hello
A couple of things to note. First, when you use partition by hash (or range) on 10gR2 and above, there is an additional BUFFER SORT operation versus using partition by ANY. For small data sets this is not necessarily an issue, but the temp space used by this stage can be significant for larger data sets, so be sure to check temp space usage for this process or you could run into problems later.
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 8168 | 1722K| 24 (0)| 00:00:01 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | 8168 | 1722K| 24 (0)| 00:00:01 | | | Q1,01 | P->S | QC (RAND) |
| 3 |****BUFFER SORT**** | | 8168 | 1722K| | | | | Q1,01 | PCWP | |
| 4 | VIEW | | 8168 | 1722K| 24 (0)| 00:00:01 | | | Q1,01 | PCWP | |
| 5 | COLLECTION ITERATOR PICKLER FETCH| TF | | | | | | | Q1,01 | PCWP | |
| 6 | PX RECEIVE | | 100 | 4800 | 2 (0)| 00:00:01 | | | Q1,01 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | 100 | 4800 | 2 (0)| 00:00:01 | | | Q1,00 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | 100 | 4800 | 2 (0)| 00:00:01 | 1 | 10 | Q1,00 | PCWC | |
| 9 | TABLE ACCESS FULL | TEST_TAB | 100 | 4800 | 2 (0)| 00:00:01 | 1 | 20 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------------------------------------------
It may be that in this case you can use clustering with partition by ANY to achieve your goal...
create or replace package test_pkg as
type Test_Tab_Rec_t is record (
Tracking_ID number(19),
Partition_Key date,
Subpartition_Key number(3),
sid number
);
type Test_Tab_Rec_Tab_t is table of Test_Tab_Rec_t;
type Test_Tab_Rec_Hash_t is table of Test_Tab_Rec_t index by binary_integer;
type Test_Tab_Rec_HashHash_t is table of Test_Tab_Rec_Hash_t index by binary_integer;
type Cur_t is ref cursor return Test_Tab_Rec_t;
procedure populate;
procedure report;
function tf(cur in Cur_t)
return test_list pipelined
parallel_enable(partition cur by hash(subpartition_key));
function tf_any(cur in Cur_t)
return test_list PIPELINED
CLUSTER cur BY (Subpartition_Key)
parallel_enable(partition cur by ANY);
end;
create or replace package body test_pkg as
procedure populate
is
Tracking_ID number(19) := 1;
Partition_Key date := current_timestamp;
Subpartition_Key number(3) := 1;
begin
dbms_output.put_line(chr(10) || 'populate data into Test_Tab...');
for Subpartition_Key in 0..99
loop
for ctr in 1..1
loop
insert into test_tab (tracking_id, partition_key, subpartition_key)
values (Tracking_ID, Partition_Key, Subpartition_Key);
Tracking_ID := Tracking_ID + 1;
end loop;
end loop;
dbms_output.put_line('...done (populate data into Test_Tab)');
end;
procedure report
is
recs Test_Tab_Rec_Tab_t;
begin
dbms_output.put_line(chr(10) || 'list data per partition/subpartition...');
for item in (select partition_name, subpartition_name from user_tab_subpartitions where table_name='TEST_TAB' order by partition_name, subpartition_name)
loop
dbms_output.put_line('partition/subpartition = ' || item.partition_name || '/' || item.subpartition_name || ':');
execute immediate 'select * from test_tab SUBPARTITION(' || item.subpartition_name || ')' bulk collect into recs;
if recs.count > 0
then
for i in recs.first..recs.last
loop
dbms_output.put_line('...' || recs(i).Tracking_ID || ', ' || recs(i).Partition_Key || ', ' || recs(i).Subpartition_Key);
end loop;
end if;
end loop;
dbms_output.put_line('... done (list data per partition/subpartition)');
end;
function tf(cur in Cur_t)
return test_list pipelined
parallel_enable(partition cur by hash(subpartition_key))
is
sid number;
input Test_Tab_Rec_t;
output test_object;
begin
select userenv('SID') into sid from dual;
loop
fetch cur into input;
exit when cur%notfound;
output := test_object(input.tracking_id, input.partition_key, input.subpartition_key,sid);
pipe row(output);
end loop;
return;
end;
function tf_any(cur in Cur_t)
return test_list PIPELINED
CLUSTER cur BY (Subpartition_Key)
parallel_enable(partition cur by ANY)
is
sid number;
input Test_Tab_Rec_t;
output test_object;
begin
select userenv('SID') into sid from dual;
loop
fetch cur into input;
exit when cur%notfound;
output := test_object(input.tracking_id, input.partition_key, input.subpartition_key,sid);
pipe row(output);
end loop;
return;
end;
end;
XXXX> with parts as (
2 select --+ materialize
3 data_object_id,
4 subobject_name
5 FROM
6 user_objects
7 WHERE
8 object_name = 'TEST_TAB'
9 and
10 object_type = 'TABLE SUBPARTITION'
11 )
12 SELECT
13 COUNT(*),
14 parts.subobject_name,
15 target.sid
16 FROM
17 parts,
18 test_tab tt,
19 test_tab_part_hash target
20 WHERE
21 tt.tracking_id = target.tracking_id
22 and
23 parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
24 GROUP BY
25 parts.subobject_name,
26 target.sid
27 ORDER BY
28 target.sid,
29 parts.subobject_name
30 /
XXXX> INSERT INTO test_tab_part_hash select * from table(test_pkg.tf(CURSOR(select * from test_tab)))
2 /
100 rows created.
Elapsed: 00:00:00.14
XXXX>
XXXX> INSERT INTO test_tab_part_any_cluster select * from table(test_pkg.tf_any(CURSOR(select * from test_tab)))
2 /
100 rows created.
--using partition by hash
XXXX> with parts as (
2 select --+ materialize
3 data_object_id,
4 subobject_name
5 FROM
6 user_objects
7 WHERE
8 object_name = 'TEST_TAB'
9 and
10 object_type = 'TABLE SUBPARTITION'
11 )
12 SELECT
13 COUNT(*),
14 parts.subobject_name,
15 target.sid
16 FROM
17 parts,
18 test_tab tt,
19 test_tab_part_hash target
20 WHERE
21 tt.tracking_id = target.tracking_id
22 and
23 parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
24 GROUP BY
25 parts.subobject_name,
26 target.sid
27 /
COUNT(*) SUBOBJECT_NAME SID
3 SYS_SUBP31 1272
1 SYS_SUBP32 1272
1 SYS_SUBP33 1272
3 SYS_SUBP34 1272
1 SYS_SUBP36 1272
1 SYS_SUBP37 1272
3 SYS_SUBP38 1272
1 SYS_SUBP39 1272
1 SYS_SUBP32 1280
2 SYS_SUBP33 1280
2 SYS_SUBP34 1280
1 SYS_SUBP35 1280
2 SYS_SUBP36 1280
1 SYS_SUBP37 1280
2 SYS_SUBP38 1280
1 SYS_SUBP40 1280
2 SYS_SUBP33 1283
2 SYS_SUBP34 1283
2 SYS_SUBP35 1283
2 SYS_SUBP36 1283
1 SYS_SUBP37 1283
1 SYS_SUBP38 1283
2 SYS_SUBP39 1283
1 SYS_SUBP40 1283
1 SYS_SUBP32 1298
1 SYS_SUBP34 1298
1 SYS_SUBP36 1298
2 SYS_SUBP37 1298
4 SYS_SUBP38 1298
2 SYS_SUBP40 1298
1 SYS_SUBP31 1313
1 SYS_SUBP33 1313
1 SYS_SUBP39 1313
1 SYS_SUBP40 1313
1 SYS_SUBP32 1314
1 SYS_SUBP35 1314
1 SYS_SUBP38 1314
1 SYS_SUBP40 1314
2 SYS_SUBP33 1381
1 SYS_SUBP34 1381
1 SYS_SUBP35 1381
3 SYS_SUBP36 1381
3 SYS_SUBP37 1381
1 SYS_SUBP38 1381
2 SYS_SUBP36 1531
1 SYS_SUBP37 1531
2 SYS_SUBP38 1531
1 SYS_SUBP39 1531
1 SYS_SUBP40 1531
2 SYS_SUBP33 1566
1 SYS_SUBP34 1566
1 SYS_SUBP35 1566
1 SYS_SUBP37 1566
1 SYS_SUBP38 1566
2 SYS_SUBP39 1566
3 SYS_SUBP40 1566
1 SYS_SUBP32 1567
3 SYS_SUBP33 1567
3 SYS_SUBP35 1567
3 SYS_SUBP36 1567
1 SYS_SUBP37 1567
2 SYS_SUBP38 1567
62 rows selected.
--using partition by any cluster by subpartition_key
Elapsed: 00:00:00.26
XXXX> with parts as (
2 select --+ materialize
3 data_object_id,
4 subobject_name
5 FROM
6 user_objects
7 WHERE
8 object_name = 'TEST_TAB'
9 and
10 object_type = 'TABLE SUBPARTITION'
11 )
12 SELECT
13 COUNT(*),
14 parts.subobject_name,
15 target.sid
16 FROM
17 parts,
18 test_tab tt,
19 test_tab_part_any_cluster target
20 WHERE
21 tt.tracking_id = target.tracking_id
22 and
23 parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
24 GROUP BY
25 parts.subobject_name,
26 target.sid
27 ORDER BY
28 target.sid,
29 parts.subobject_name
30 /
COUNT(*) SUBOBJECT_NAME SID
11 SYS_SUBP37 1253
10 SYS_SUBP34 1268
4 SYS_SUBP31 1289
10 SYS_SUBP40 1314
7 SYS_SUBP39 1367
9 SYS_SUBP35 1377
14 SYS_SUBP36 1531
5 SYS_SUBP32 1572
13 SYS_SUBP33 1577
17 SYS_SUBP38 1609
10 rows selected.
Bear in mind though that this does require a sort of the incoming dataset, but does not require buffering of the output...
PLAN_TABLE_OUTPUT
Plan hash value: 2570087774
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 8168 | 1722K| 24 (0)| 00:00:01 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 8168 | 1722K| 24 (0)| 00:00:01 | | | Q1,00 | P->S | QC (RAND) |
| 3 | VIEW | | 8168 | 1722K| 24 (0)| 00:00:01 | | | Q1,00 | PCWP | |
| 4 | COLLECTION ITERATOR PICKLER FETCH| TF_ANY | | | | | | | Q1,00 | PCWP | |
| 5 | SORT ORDER BY | | | | | | | | Q1,00 | PCWP | |
| 6 | PX BLOCK ITERATOR | | 100 | 4800 | 2 (0)| 00:00:01 | 1 | 10 | Q1,00 | PCWC | |
| 7 | TABLE ACCESS FULL | TEST_TAB | 100 | 4800 | 2 (0)| 00:00:01 | 1 | 20 | Q1,00 | PCWP | |
----------------------------------------------------------------------------------------------------------------------------------------------
HTH
David -
Table Comparison to pull back non matching data
Hello,
I have been working on the below for a few days now and cannot figure out how to get the desired results. I was hoping someone could point me in the right direction. I have attached the data below and the queries I have come up with so far. I only need data for MU_ID (3, 7, 4) and only need SKILL_NM ('THICV','HELPDESK_FOUNDATIONAL','SPANISH','AUTO','HELPDESK_COMPLEX','HOUSE_COMPLEX','BOAT','HOUSE','HELPDESK','HELPDESK_MODERATE'), as there are hundreds more in the actual tables. I also have the problem of the skill levels for the foundational/moderate/complex skill names from the IEX table: if SKILL_LEVEL is 0-2 on the GEN table they are listed as _FOUNDATIONAL in the IEX table, 3-7 is _MODERATE, and 8-10 is _COMPLEX, but only for the SKILL_NM 'HELPDESK' & 'HOUSE'.
CREATE TABLE IEX(
MU_ID NUMBER(5),
AGENT_NM VARCHAR2(30),
EXTERNAL_ID VARCHAR2(8),
SKILL_NM VARCHAR2(50))
CREATE TABLE GEN(
USER_ID VARCHAR2(8),
SKILL_NM VARCHAR2(255),
SKILL_LEVEL NUMBER(10))
INSERT INTO IEX(MU_ID,AGENT_NM,EXTERNAL_ID,SKILL_NM)VALUES(3,'ROBERTS,CHRIS','ROBERT1','THICV')
INSERT INTO IEX(MU_ID,AGENT_NM,EXTERNAL_ID,SKILL_NM)VALUES(3,'ROBERTS,CHRIS','ROBERT1','HELPDESK_FOUNDATIONAL')
INSERT INTO IEX(MU_ID,AGENT_NM,EXTERNAL_ID,SKILL_NM)VALUES(7,'SEW,HEATHER','SEW1','SPANISH')
INSERT INTO IEX(MU_ID,AGENT_NM,EXTERNAL_ID,SKILL_NM)VALUES(7,'SEW,HEATHER','SEW1','AUTO')
INSERT INTO IEX(MU_ID,AGENT_NM,EXTERNAL_ID,SKILL_NM)VALUES(4,'PRATT,MIKE','PRATT2','HOUSE_COMPLEX')
INSERT INTO IEX(MU_ID,AGENT_NM,EXTERNAL_ID,SKILL_NM)VALUES(4,'PRATT,MIKE','PRATT2','HELPDESK_MODERATE')
INSERT INTO GEN(USER_ID,SKILL_NM,SKILL_LEVEL)VALUES('ROBERT1','THICV',1)
INSERT INTO GEN(USER_ID,SKILL_NM,SKILL_LEVEL)VALUES('ROBERT1','HELPDESK',7)
INSERT INTO GEN(USER_ID,SKILL_NM,SKILL_LEVEL)VALUES('SEW1','SPANISH',1)
INSERT INTO GEN(USER_ID,SKILL_NM,SKILL_LEVEL)VALUES('SEW1','BOAT',1)
INSERT INTO GEN(USER_ID,SKILL_NM,SKILL_LEVEL)VALUES('PRATT2','HOUSE',9)
INSERT INTO GEN(USER_ID,SKILL_NM,SKILL_LEVEL)VALUES('PRATT2','HELPDESK',2)
DESIRED RESULTS:
MU_ID AGENT_NM EXTERNAL_ID IEX_SKILL_NM GEN_SKILL_NM SKILL_LEVEL
3 ROBERTS,CHRIS ROBERT1 NULL HELPDESK 7
3 ROBERTS,CHRIS ROBERT1 HELPDESK_FOUNDATIONAL NULL NULL
7 SEW,HEATHER SEW1 AUTO NULL NULL
7 SEW,HEATHER SEW1 NULL BOAT 1
4 PRATT,MIKE PRATT2 HELPDESK_MODERATE NULL NULL
4 PRATT,MIKE PRATT2 NULL HELPDESK 2
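The desired rows above are a full outer join restricted to non-matching keys. As a cross-check of the matching logic, here is a sketch in plain Python that reproduces the sample data and the level-suffix rule from the post; in SQL this corresponds to a FULL OUTER JOIN plus a filter keeping rows where either side's key is NULL:

```python
# Keys are (external_id/user_id, skill_nm); data copied from the INSERTs above.
iex = {
    ("ROBERT1", "THICV"), ("ROBERT1", "HELPDESK_FOUNDATIONAL"),
    ("SEW1", "SPANISH"), ("SEW1", "AUTO"),
    ("PRATT2", "HOUSE_COMPLEX"), ("PRATT2", "HELPDESK_MODERATE"),
}
gen = {
    ("ROBERT1", "THICV"): 1, ("ROBERT1", "HELPDESK"): 7,
    ("SEW1", "SPANISH"): 1, ("SEW1", "BOAT"): 1,
    ("PRATT2", "HOUSE"): 9, ("PRATT2", "HELPDESK"): 2,
}
leveled = {"HELPDESK", "HOUSE"}  # only these carry a level suffix in IEX

def level_suffix(level):
    """Map a GEN skill level to the suffix IEX appends (rule from the post)."""
    return "_FOUNDATIONAL" if level <= 2 else "_MODERATE" if level <= 7 else "_COMPLEX"

def as_iex_name(nm, lv):
    """Translate a GEN skill into the name IEX would use for it."""
    return nm + level_suffix(lv) if nm in leveled else nm

gen_as_iex = {(uid, as_iex_name(nm, lv)) for (uid, nm), lv in gen.items()}

# Anti-join both ways: keep only rows present on exactly one side.
iex_only = iex - gen_as_iex
gen_only = {(uid, nm) for (uid, nm), lv in gen.items()
            if (uid, as_iex_name(nm, lv)) not in iex}
```

Running this reproduces the six desired rows: three IEX-only skills and three GEN-only skills.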
I wrote the 2 queries below. The first one is getting some of the data I need, but not all of it. The second one was something I was playing around with to see if it would do what I need; it kind of works, but it pulls back way more data than I need, and I cannot figure out how to include the skill level data in it.
SELECT
A.MU_ID,
A.AGENT_NM,
A.EXTERNAL_ID,
A.SKILL_NM
FROM IEX A
WHERE
A.mu_id IN
('3', '4', '7') AND
UPPER (A.AGENT_NM) NOT LIKE ('%Temp%') AND
A.EXTERNAL_ID IS NOT NULL
AND A.SKILL_NM NOT IN
(SELECT B.SKILL_NM
FROM GEN B
WHERE A.EXTERNAL_ID = B.USER_ID
and A.SKILL_NM = B.SKILL_NM)
ORDER BY AGENT_NM ASC
(SELECT
A.EXTERNAL_ID,
A.SKILL_NM
FROM
IEX A
WHERE
A.MU_ID IN ('3', '4', '7')
MINUS
SELECT
B.USER_ID,
B.SKILL_NM
FROM
GEN B
WHERE
B.SKILL_NM IN
('THICV',
'HELPDESK_FOUNDATIONAL',
'SPANISH',
'AUTO',
'HELPDESK_COMPLEX',
'HOUSE_COMPLEX',
'BOAT',
'HOUSE',
'HELPDESK',
'HELPDESK_MODERATE'))
UNION ALL
(SELECT
B.USER_ID,
B.SKILL_NM
FROM
GEN B
WHERE
B.SKILL_NM IN
('THICV',
'HELPDESK_FOUNDATIONAL',
'SPANISH',
'AUTO',
'HELPDESK_COMPLEX',
'HOUSE_COMPLEX',
'BOAT',
'HOUSE',
'HELPDESK',
'HELPDESK_MODERATE')
MINUS
SELECT
A.EXTERNAL_ID,
A.SKILL_NM
FROM
IEX A
WHERE
A.MU_ID IN ('3', '4', '7'))
Thanks Frank,
I guess I explained it wrong. What you provided does pull back non-matching data, but it is also pulling back matching data. Below is the exact query I am using and sample data. What I need is to show all skill_nm that do not match each other from both tables. There are a handful of skill_nm where I have to use a condition with levels to make them match up based on "complex" / "moderate" / "foundational", but only these few need that condition; everything else is just a straight skill_nm from A = skill_nm from B.
My current query:
SELECT
A.MU_ID,
A.AGENT_NM,
B.USER_ID AS EXTERNAL_ID,
A.SKILL_NM AS IEX_SKILL_NM,
B.SKILL_NM AS GEN_SKILL_NM,
B.SKILL_LEVEL
FROM
LIGHTHOUSE.IEX_AGT_SKILL A
FULL OUTER JOIN
LIGHTHOUSE.CFG_PERSON_SKILL_VALUES B
ON A.EXTERNAL_ID = B.USER_ID AND
A.SKILL_NM = B.SKILL_NM
|| CASE
WHEN B.SKILL_NM NOT IN ('THIPayment','THIPL','SPSC') THEN NULL
WHEN B.SKILL_LEVEL <= 2 THEN '_FOUNDATIONAL'
WHEN B.SKILL_LEVEL <= 7 THEN '_MODERATE'
WHEN B.SKILL_LEVEL <= 10 THEN '_COMPLEX'
END AND
A.MU_ID IN
('3','4','5','6','7','12','14','220','222','410','411','412','413','414','415','480','600','650','717','720','721',
'722','723','800','801','3008','3010','3012','3100','4200','4201','4202','4203','4400','4401','4402','4404')
Doing this, it is looking at the SKILL_LEVEL for all SKILL_NM and pulling back things that match on name but not on the level. I only need to have the skill level match up for:
GENESYS      IEX                        SKILL LEVEL
THIPayment   THIPayment_Complex         8 to 9
             THIPayment_Foundational    0 to 1
             THIPayment_Moderate        2 to 7
THIPL        THIPL_Complex              8 to 9
             THIPL_Foundational         0 to 1
             THIPL_Moderate             2 to 7
SPSC         SPSC_Foundational          0 to 1
             SPSC_Moderate              2 to 7
PLSCLegacy   PLSCLegacy_Complex         8 to 9
             PLSCLegacy_Foundational    0 to 1
             PLSCLegacy_Moderate        2 to 7
PLSCPCIO     PLSCPCIO_Complex           8 to 9
             PLSCPCIO_Foundational      0 to 1
             PLSCPCIO_Moderate          2 to 7
CREATE TABLE IEX_AGT_SKILL(
MU_ID NUMBER(5),
AGENT_NM VARCHAR2(30),
EXTERNAL_ID VARCHAR2(8),
SKILL_NM VARCHAR2(50))
CREATE TABLE CFG_PERSON_SKILL_VALUES(
USER_ID VARCHAR2(8),
SKILL_NM VARCHAR2(255),
SKILL_LEVEL NUMBER(10))
INSERT INTO IEX_AGT_SKILL(MU_ID,AGENT_NM,EXTERNAL_ID,SKILL_NM)VALUES(3,'ROBERTS,CHRIS','ROBERT1','THIPayment_FOUNDATIONAL')
INSERT INTO IEX_AGT_SKILL(MU_ID,AGENT_NM,EXTERNAL_ID,SKILL_NM)VALUES(3,'ROBERTS,CHRIS','ROBERT1','SPSC_FOUNDATIONAL')
INSERT INTO IEX_AGT_SKILL(MU_ID,AGENT_NM,EXTERNAL_ID,SKILL_NM)VALUES(7,'SEW,HEATHER','SEW1','SPSC_MODERATE')
INSERT INTO IEX_AGT_SKILL(MU_ID,AGENT_NM,EXTERNAL_ID,SKILL_NM)VALUES(7,'SEW,HEATHER','SEW1','SPSC_BOAT')
INSERT INTO IEX_AGT_SKILL(MU_ID,AGENT_NM,EXTERNAL_ID,SKILL_NM)VALUES(4,'PRATT,MIKE','PRATT2','THIPayment_COMPLEX')
INSERT INTO IEX_AGT_SKILL(MU_ID,AGENT_NM,EXTERNAL_ID,SKILL_NM)VALUES(4,'PRATT,MIKE','PRATT2','HELPDESK')
INSERT INTO CFG_PERSON_SKILL_VALUES(USER_ID,SKILL_NM,SKILL_LEVEL)VALUES('ROBERT1','THIPayment',1)
INSERT INTO CFG_PERSON_SKILL_VALUES(USER_ID,SKILL_NM,SKILL_LEVEL)VALUES('ROBERT1','SPSC',7)
INSERT INTO CFG_PERSON_SKILL_VALUES(USER_ID,SKILL_NM,SKILL_LEVEL)VALUES('SEW1','SPSC',1)
INSERT INTO CFG_PERSON_SKILL_VALUES(USER_ID,SKILL_NM,SKILL_LEVEL)VALUES('SEW1','SPSC_BOAT',1)
INSERT INTO CFG_PERSON_SKILL_VALUES(USER_ID,SKILL_NM,SKILL_LEVEL)VALUES('PRATT2','SPANISH',9)
INSERT INTO CFG_PERSON_SKILL_VALUES(USER_ID,SKILL_NM,SKILL_LEVEL)VALUES('PRATT2','HELPDESK',2)
DESIRED OUTCOME:
MU_ID AGENT_NM EXTERNAL_ID IEX_SKILL_NM GEN_SKILL_NM
3 ROBERTS,CHRIS ROBERT1 SPSC_FOUNDATIONAL
3 ROBERTS,CHRIS ROBERT1 SPSC_MODERATE
7 SEW,HEATHER SEW1 SPSC_MODERATE
7 SEW,HEATHER SEW1 SPSC_FOUNDATIONAL
4 PRATT,MIKE PRATT2 THIPayment_COMPLEX
4 PRATT,MIKE PRATT2 SPANISH -
Hi
I am developing a mobile application about physical fitness for an undergraduate final year project, and one of the features I would like to include is giving suggestions to the user on how to make their workout (walking, jogging or running) more effective; for example, it will tell the user the best workout plan given their level of physical fitness, etc.
I would like to achieve this using data mining. Are there any data sets related to this area, and if so, where can I get one?
Thanks
If I were you, here is the first approach I would use:
You can try to estimate the physical well-being score of a user by assigning them EXTREME, GOOD, MEDIUM, or POOR scores (notice that this is something subjective you need to define as the domain expert). Then you can use the following variables as the predictors/inputs:
- Average/Median/Maximum/Minimum miles run in a week/day/month in last week/month/quarter/year.
- Average/Median/Maximum/Minimum cardio minutes in a week/day/month in last week/month/quarter/year.
- Average/Median/Maximum/Minimum pounds in bench-press/squat/biceps/triceps in a week/day/month in last week/month/quarter/year.
I think you got the idea of possible inputs you can define. One important issue: since you don't know whether the average or the maximum value for an input will work, you need to define all of them (all combinations of all dimensions: statistical function, the metric itself, time-window size, and break-down time-window size). As a result you will have more variables than you need. Feed them into the Oracle Data Mining Attribute Importance algorithm to select the really relevant ones.
Then build an Oracle decision tree model to estimate the physical well-being value.
Once you are done you can export the model as an XML file.
Now what you need to do is to generate fitness recommendations to POOR, MEDIUM and GOOD guys. Here is the flow:
1. Take a user and find the matching decision tree rule for it.
2. If it is not classified as EXTREME find the diff of rules between your rule set and rule sets for classes better than your class (XML operation over Decision Tree XML).
3. Choose the minimum sized diff set. Actually these are most probably the changes your user needs to make in his/her fitness program. Present it properly in your mobile application.
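The rule-diff in steps 2-3 can be sketched roughly like this in Python (the classes are from the approach above, but the attributes, thresholds, and rule representation are made-up illustrations; real rules would be parsed out of the exported decision-tree XML):

```python
# Each class's rule as a set of (attribute, minimum-value) conditions.
rules = {
    "POOR":    {("weekly_miles", 0),  ("cardio_minutes", 0)},
    "MEDIUM":  {("weekly_miles", 5),  ("cardio_minutes", 60)},
    "GOOD":    {("weekly_miles", 15), ("cardio_minutes", 120)},
    "EXTREME": {("weekly_miles", 30), ("cardio_minutes", 240)},
}
order = ["POOR", "MEDIUM", "GOOD", "EXTREME"]

def recommendation(current_class):
    """Diff the user's rule against every better class and return the
    smallest set of changes (steps 2-3 of the flow above)."""
    if current_class == "EXTREME":
        return set()
    better = order[order.index(current_class) + 1:]
    diffs = [rules[c] - rules[current_class] for c in better]
    return min(diffs, key=len)

rec = recommendation("MEDIUM")  # conditions the user should work toward
```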
Incrementally you can improve your model by adding age, gender, etc. But remember to start as a minimalist. -
Possible feature: Automatic data set linking...
A useful feature to add to Spry: Data set linking within the
definition of a dataset.
Although this can be achieved using filtering or dataset
regeneration programming, it would simplify the coding process
tremendously if done at the dataset definition level.
For example:
var dsCategories = new
Spry.Data.XMLDataSet("cetrgories.xml", "category");
var dsItems = new Spry.Data.XMLDataSet("items.xml",
"category/item", { link: "category = dsCategories.category"});
Here, the 'link' specification would cause the dsItems to be
automatically filtered on the field {category} matching the current
dsCategories row field {category} value when the
dsCategories.CurrentRow changes.
Hi B_r_u_n_o,
You can already do this today with XPath filtering as Don
mentioned:
var dsCategories = new Spry.Data.XMLDataSet("cetrgories.xml",
"category");
var dsItems = new Spry.Data.XMLDataSet("items.xml",
"category/item[category = '{dsCategories::category}']");
--== Kin ==-- -
XML Data Set selection by attribute?
I am new to Spry and was trying to work with the XML Data Set
feature. I have an XML file with the schema listed below. I wanted
to know if it were possible to only grab the data from this XML
file if it matches a certain type? For example, grab data from
seminarType where @type="condition1"? Is this able to be done or
will I have to generate an XML file for each type? My goal was to
have one large file to grab data from.
<seminars>
<seminarType type="">
<seminarSession type="">
<seminar>
<location><![CDATA[]]></location>
<date></date>
<time></time>
<seats></seats>
<directions><![CDATA[]]></directions>
</seminar>
</seminarSession>
</seminarType>
</seminars>
The seminar node contains all the information I want to grab.
I need to be able to select those nodes based off seminarType @type
and then seminarSession @type.
Example:
<seminars>
<seminarType type="type1">
<seminarSession type="session1">
<seminar>
<location><![CDATA[location]]></location>
<date>12-12-2007</date>
<time>14:00</time>
<seats>23</seats>
<directions><![CDATA[mapquest
directions]]></directions>
</seminar>
</seminarSession>
<seminarSession type="session2">
<seminar>
<location><![CDATA[location]]></location>
<date>10-08-2007</date>
<time>10:00</time>
<seats>15</seats>
<directions><![CDATA[mapquest
directions]]></directions>
</seminar>
</seminarSession>
</seminarType>
</seminars>
So I would want to grab all the session nodes where
seminarType @type="type1" and seminarSession @type="session1"
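Attribute predicates in the XPath should do exactly that, so one large file is fine. A quick sketch using Python's xml.etree.ElementTree with a trimmed version of the sample above, purely to show the selection working:

```python
import xml.etree.ElementTree as ET

# Trimmed version of the sample document above.
doc = """
<seminars>
  <seminarType type="type1">
    <seminarSession type="session1">
      <seminar><location>loc A</location><seats>23</seats></seminar>
    </seminarSession>
    <seminarSession type="session2">
      <seminar><location>loc B</location><seats>15</seats></seminar>
    </seminarSession>
  </seminarType>
</seminars>
"""

root = ET.fromstring(doc)
# Attribute predicates pick out just the matching branch.
path = "./seminarType[@type='type1']/seminarSession[@type='session1']/seminar"
seminars = root.findall(path)
seats = [s.findtext("seats") for s in seminars]
```

In Spry the equivalent would be an XPath like seminars/seminarType[@type='type1']/seminarSession[@type='session1']/seminar passed to the data set constructor.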
Does this help? -
Nested Left Outer Join : Data Set
Hi All
I am a bit confused about the data set used by a nested left outer join.
Can anyone help me.
Here is sample data:
Tables (Name, 3 Column each, total rows and matched rows if any):
Table 1
A B C
Total 20 Rows
Table 2
A D E
Total 50 Rows and 10 Matching on 2.A = 1.A
Table 3
D M N
Total 15 Rows and 15 Matching on 3.D = 2.D
Table 4
M X Y
Total 20 Rows and 10 Matching on 4.M = 3.M
Sql
select *
From Table 1
Left Outer Join on Table 2 on
2.A = 1.A
-- Data set 1 will contain 20 Rows (10 matching and 10 non matching)
Left Outer Join on Table 3 on
3.D = 2.D
-- What will the data set be? The 20 rows of data set 1, or the 15 matching rows?
Left Outer Join on Table 4 on
4.M = 3.M
-- What will the data set be? The X rows of data set 2, or the 10 matching rows?
Please have a look and clear up my understanding.
SeshuGiri wrote:
I have two tables defined (below). Emp table has data and there is no data in Emp_Type table yet! Right now it is empty.
I want to write a query that returns data from both tables even though there is no data in the Emp_Type table. I am using a left outer join, but it is returning nothing. Can anyone help?
select *
from emp e
left outer join emp_Type t
on e.empid = t.empid
WHERE t.type_id = 1
and t.end_date is null;
The join is including all rows from emp, just like you want.
The WHERE clause is discarding all of those rows. Since all the columns from emp_type (alias t) are NULL, the condition "t.type_id = 1" in the WHERE clause is never true.
Perhaps you meant to include all those conditions in the join conditions, like this:
select *
from emp e
left outer join emp_Type t
on e.empid = t.empid
and t.type_id = 1
and t.end_date is null;
Edited by: Frank Kulash on Jan 30, 2012 3:56 PM
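The difference is easy to demonstrate. A small sketch using Python's sqlite3 with the table and column names from the post (schema trimmed to the relevant columns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE emp (empid INTEGER, name TEXT)")
cur.execute("CREATE TABLE emp_type (empid INTEGER, type_id INTEGER, end_date TEXT)")
cur.execute("INSERT INTO emp VALUES (1, 'A'), (2, 'B')")
# emp_type stays empty, as in the post.

# Conditions in WHERE run after the join and discard the NULL-extended rows:
where_rows = cur.execute("""
    SELECT e.empid FROM emp e
    LEFT OUTER JOIN emp_type t ON e.empid = t.empid
    WHERE t.type_id = 1 AND t.end_date IS NULL
""").fetchall()

# The same conditions in ON keep every emp row:
on_rows = cur.execute("""
    SELECT e.empid FROM emp e
    LEFT OUTER JOIN emp_type t
      ON e.empid = t.empid AND t.type_id = 1 AND t.end_date IS NULL
    ORDER BY e.empid
""").fetchall()
```

The first query returns nothing; the second returns both emp rows with NULLs for the emp_type columns.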
Design studio 1.3 - No matching data found in getDataAsString
I had developed a dashboard using DS 1.2; we have now upgraded our BusinessObjects platform to BO 4.1 and upgraded to Design Studio 1.3.
I am now getting the error message "No matching data found in getDataAsString("006EI2SHD6LIVHQ6UOHCIDRDB", {}).", which worked in DS 1.2.
I see the data in a cross table, but the error occurs only when I try to set the text as per the coding below.
// Master data spend Items
TEXT_1.setText(DS_1.getDataAsString("006EI2SHD6LIVHQ6UOHCIDRDB",{}));
com.sap.ip.bi.zen.rt.framework.jsengine.JsEngineException: org.mozilla.javascript.WrappedException: Wrapped java.lang.NullPointerException
at com.sap.ip.bi.zen.rt.framework.jsengine.rhino.RhinoJsEngine.handleError(RhinoJsEngine.java:141)
at com.sap.ip.bi.zen.rt.framework.jsengine.rhino.RhinoJsEngine.doRunScript(RhinoJsEngine.java:70)
at com.sap.ip.bi.zen.rt.framework.jsengine.JsEngine.runScript(JsEngine.java:32)
at com.sap.ip.bi.zen.rt.framework.jsengine.rhino.RhinoScriptInterpreterBialService.interprete(RhinoScriptInterpreterBialService.java:191)
at com.sap.ip.bi.base.command.impl.Command.interprete(Command.java:189)
at com.sap.ip.bi.webapplications.runtime.impl.page.Page.processCommandSequence(Page.java:4662)
at com.sap.ip.bi.webapplications.runtime.impl.page.Page.doProcessRequest(Page.java:2537)
at com.sap.ip.bi.webapplications.runtime.impl.page.Page._processRequest(Page.java:774)
at com.sap.ip.bi.webapplications.runtime.impl.page.Page.processRequest(Page.java:5080)
at com.sap.ip.bi.webapplications.runtime.impl.page.Page.processRequest(Page.java:5073)
at com.sap.ip.bi.webapplications.runtime.impl.controller.Controller.doProcessRequest(Controller.java:1238)
at com.sap.ip.bi.webapplications.runtime.impl.controller.Controller._processRequest(Controller.java:1088)
at com.sap.ip.bi.webapplications.runtime.impl.controller.Controller.processRequest(Controller.java:1054)
at com.sap.ip.bi.webapplications.runtime.impl.controller.Controller.processRequest(Controller.java:1)
at com.sap.ip.bi.server.runtime.sevice.impl.BIRuntimeServerService._handleRequest(BIRuntimeServerService.java:538)
at com.sap.ip.bi.server.runtime.sevice.impl.BIRuntimeServerService.handleRequest(BIRuntimeServerService.java:943)
at com.sap.ip.bi.server.execution.engine.runtime.BIExecutionEngineRuntime.executeRequest(BIExecutionEngineRuntime.java:48)
at com.sap.ip.bi.framework.base.execution.impl.BIExecutionService.executeRequest(BIExecutionService.java:54)
at com.sap.ip.bi.client.execution.AbstractExecutionServlet.handleRequest(AbstractExecutionServlet.java:161)
at com.sap.ip.bi.client.servlet.BIPrivateServlet.handleRequest(BIPrivateServlet.java:36)
at com.sap.ip.bi.client.execution.AbstractExecutionServlet.doPost(AbstractExecutionServlet.java:140)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:755)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at org.eclipse.equinox.http.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:38)
at com.sap.ip.bi.zen.webserver.internal.ZenSessionFilter.doFilter(ZenSessionFilter.java:42)
at org.eclipse.equinox.http.servlet.internal.FilterRegistration.doFilter(FilterRegistration.java:81)
at org.eclipse.equinox.http.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:35)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:132)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:60)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.equinox.http.jetty.internal.HttpServerManager$InternalHttpServiceServlet.service(HttpServerManager.java:386)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:669)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:457)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Unknown Source)
Caused by: org.mozilla.javascript.WrappedException: Wrapped java.lang.NullPointerException
at org.mozilla.javascript.Context.throwAsScriptRuntimeEx(Context.java:1786)
at org.mozilla.javascript.MemberBox.invoke(MemberBox.java:183)
at org.mozilla.javascript.NativeJavaMethod.call(NativeJavaMethod.java:247)
at org.mozilla.javascript.Delegator.call(Delegator.java:229)
at com.sap.ip.bi.zen.rt.framework.jsengine.rhino.shared.CustomFunction.call(CustomFunction.java:36)
at org.mozilla.javascript.Interpreter.interpretLoop(Interpreter.java:1701)
at org.mozilla.javascript.Interpreter.interpret(Interpreter.java:854)
at org.mozilla.javascript.InterpretedFunction.call(InterpretedFunction.java:164)
at org.mozilla.javascript.ContextFactory.doTopCall(ContextFactory.java:426)
at com.sap.ip.bi.zen.rt.framework.jsengine.rhino.CustomContextFactory.doTopCall(CustomContextFactory.java:54)
at org.mozilla.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:3178)
at org.mozilla.javascript.Context.callFunctionWithContinuations(Context.java:1204)
at org.mozilla.javascript.Context.executeScriptWithContinuations(Context.java:1171)
at com.sap.ip.bi.zen.rt.framework.jsengine.rhino.RhinoJsEngine.doRunScript(RhinoJsEngine.java:60)
... 51 more
Caused by: java.lang.NullPointerException
at com.sap.ip.bi.zen.rt.components.ds.impl.DataSourceCommandResolver.getCell(DataSourceCommandResolver.java:1239)
at com.sap.ip.bi.zen.rt.components.ds.impl.DataSourceCommandResolver.getDataAsString(DataSourceCommandResolver.java:1095)
at sun.reflect.GeneratedMethodAccessor103.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.mozilla.javascript.MemberBox.invoke(MemberBox.java:161)
... 63 more -
Hey,
I am currently working on a timetable lookup system. I have 2 tables set up, each with 35 cells, and each searching through over 1000 lines of XML. As you can imagine, this takes bloomin' ages!
Is there any way to speed this process up? You can see the code I am using below.
SCRIPT:
<script type="text/javascript">
<!--
// Decipher the username
var username = '03ROSSMI';
var y7Year = Number(username.substr(0,2));
var firstname3D = username.substr(2,3).toUpperCase();
var lastname3D = username.substr(5,7).toUpperCase();
// Work out what year the student is in
var date = new Date();
var year = String(date.getFullYear());
var year2D = Number(year.substr(3));
var yearGroup = (year2D - y7Year) + 7;
// Dynamically set up the XML data set
var timetable = new Spry.Data.XMLDataSet("data/year" + yearGroup + "abc.xml", "SuperStarReport/Record");
function displayData(Name, session, ChosenName, Surname)
{
    // Convert the student's first and last name into a 3-character format and make it uppercase
    var ChosenName3D = ChosenName.substr(0,3).toUpperCase();
    var Surname3D = Surname.substr(0,3).toUpperCase();
    // If the session name, first name and last name all match then return true
    if (Name == session && ChosenName3D == firstname3D && Surname3D == lastname3D)
    {
        return true;
    }
    else
    {
        return false;
    }
}
//-->
</script>
HTML:
<div id="ttTitle">Timetable - Adam Smith JMN</div>
<div class="ttHeaderWrap">
<!-- HEADER INFO -->
<div>Mon A</div>
<div>Tue A</div>
<div>Wed A</div>
<div>Thu A</div>
<div>Fri A</div>
</div>
<div class="ttWrap">
<!-- SESSION 1 -->
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Mon A 1', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Tue A 1', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Wed A 1', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Thu A 1', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Fri A 1', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<!-- SESSION 2 -->
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Mon A 2', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Tue A 2', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Wed A 2', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Thu A 2', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Fri A 2', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<!-- SESSION 3 -->
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Mon A 3', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Tue A 3', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Wed A 3', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Thu A 3', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Fri A 3', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<!-- SESSION 4 -->
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Mon A 4', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Tue A 4', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Wed A 4', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Thu A 4', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Fri A 4', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<!-- SESSION 5 -->
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Mon A 5', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Tue A 5', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Wed A 5', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Thu A 5', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Fri A 5', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<!-- SESSION 6 -->
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Mon A 6', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Tue A 6', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Wed A 6', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Thu A 6', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Fri A 6', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<!-- SESSION 7 -->
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Mon A 7', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Tue A 7', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Wed A 7', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Thu A 7', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
<div spry:repeat="timetable" spry:test="displayData('{Name}', ' Fri A 7', '{ChosenName}', '{Surname}')">{Description}<br />{Initials} {Name1}</div>
</div>
-
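A note on the timetable lookup above: the main cost is that every one of the 35 cells re-scans all ~1000 records. Building a lookup table once and doing a keyed fetch per cell avoids that. A sketch of the idea in Python (the same approach can be applied to the Spry data set rows in JavaScript; the sample records are made up):

```python
# Build a dictionary keyed by (session, first-3-of-forename, first-3-of-surname)
# in one pass, then each table cell is a single O(1) lookup instead of a scan.
records = [
    {"Name": " Mon A 1", "ChosenName": "Adam", "Surname": "Smith", "Description": "Maths"},
    {"Name": " Tue A 1", "ChosenName": "Adam", "Surname": "Smith", "Description": "English"},
]

def make_key(session, chosen_name, surname):
    # Mirrors the displayData matching: 3 characters, uppercased
    return (session, chosen_name[:3].upper(), surname[:3].upper())

index = {make_key(r["Name"], r["ChosenName"], r["Surname"]): r for r in records}

# One cheap dictionary lookup per table cell:
cell = index.get(make_key(" Mon A 1", "Adam", "Smith"))
print(cell["Description"] if cell else "")  # Maths
```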
Variable Data Set has only one thing but the error asks for more
Hi -
I have a PSD (CS5, Win7) where I've defined 5 layers with variable names. One of them is the Heading layer, with the variable named varHeading. The data set I'm importing is very simple in that only the Heading layer will change. I've gone over the construction of this text file a number of times, but when I try to import it as a data set I get the error:
"Could not parse the file contents as a data set. There were not enough variable names in the first line of the text file."
The text file contents are:
varHeading
"BABY CAR SEAT AECC0841"
"BABY CAR SEAT AECC0842"
"BABY CAR SEAT AECC0843"
"BABY CAR SEAT AECC0844"
"BABY CAR SEAT AECC0845"
My first question is: does the number of variables stored in the PSD file have to match the variable names in the first line of the text file?
If NOT, can someone please help me figure out what I'm doing wrong?
TIA for your expert input.
j2

When you say you have defined 5 layers with variable names, are you trying to attach the data set, which has only one column, to each one at the same time? Did you create the data set in the Photoshop Data Sets dialog? The error that you are getting pops up when you have defined more variables than you have in your data set.
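That matching requirement can be sanity-checked before importing. A quick Python sketch (varHeading comes from the thread; varImage is a made-up second variable to show the mismatch case):

```python
import csv
import io

# The first line of a Photoshop variable data set file must contain one
# column name for every variable defined in the PSD.
def check_data_set(text, defined_variables):
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    missing = set(defined_variables) - set(h.strip() for h in header)
    return sorted(missing)  # empty list means the header covers all variables

data = 'varHeading\n"BABY CAR SEAT AECC0841"\n"BABY CAR SEAT AECC0842"\n'
print(check_data_set(data, ["varHeading"]))              # []
print(check_data_set(data, ["varHeading", "varImage"]))  # ['varImage']
```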
-
Open data set and close data set
hi all,
i have a doubt about OPEN / READ / CLOSE DATASET:
how do I transfer data from an internal table to a sequential file, and how do we find the sequential file?
thanks and regards
chaitanya

Hi Chaitanya,
Refer Sample Code:
constants: c_split TYPE c
VALUE cl_abap_char_utilities=>horizontal_tab,
c_path TYPE char100
VALUE '/local/data/interface/A28/DM/OUT'.
* Selection Screen
SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
PARAMETERS : rb_pc RADIOBUTTON GROUP r1 DEFAULT 'X'
USER-COMMAND ucomm, "For Presentation
p_f1 LIKE rlgrap-filename
MODIF ID rb1, "Input File
rb_srv RADIOBUTTON GROUP r1, "For Application
p_f2 LIKE rlgrap-filename
MODIF ID rb2, "Input File
p_direct TYPE char128 MODIF ID abc DEFAULT c_path.
"File directory
SELECTION-SCREEN END OF BLOCK b1.
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_f1.
*-- Browse Presentation Server
PERFORM f1000_browse_presentation_file.
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_f2.
*-- Browse Application Server
PERFORM f1001_browse_appl_file.
AT SELECTION-SCREEN OUTPUT.
LOOP AT SCREEN.
IF rb_pc = 'X' AND screen-group1 = 'RB2'.
screen-input = '0'.
MODIFY SCREEN.
ELSEIF rb_srv = 'X' AND screen-group1 = 'RB1'.
screen-input = '0'.
MODIFY SCREEN.
ENDIF.
IF screen-group1 = 'ABC'.
screen-input = '0'.
MODIFY SCREEN.
ENDIF.
ENDLOOP.
*& Form f1000_browse_presentation_file
* Pick up the filepath for the file in the presentation server
FORM f1000_browse_presentation_file .
CONSTANTS: lcl_path TYPE char20 VALUE 'C:'.
CALL FUNCTION 'WS_FILENAME_GET'
EXPORTING
def_path = lcl_path
mask = c_mask "',.,..'
mode = c_mode
title = text-006
IMPORTING
filename = p_f1
EXCEPTIONS
inv_winsys = 1
no_batch = 2
selection_cancel = 3
selection_error = 4
OTHERS = 5.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
flg_pre = c_x.
ENDIF.
ENDFORM. " f1000_browse_presentation_file
*& Form f1001_browse_appl_file
* Pick up the file path for the file in the application server
FORM f1001_browse_appl_file .
DATA: lcl_directory TYPE char128.
lcl_directory = p_direct.
CALL FUNCTION '/SAPDMC/LSM_F4_SERVER_FILE'
EXPORTING
directory = lcl_directory
filemask = c_mask
IMPORTING
serverfile = p_f2
EXCEPTIONS
canceled_by_user = 1
OTHERS = 2.
IF sy-subrc <> 0.
MESSAGE e000(zmm) WITH text-039.
flg_app = 'X'.
ENDIF.
ENDFORM. " f1001_browse_appl_file
*& Form f1003_pre_file
* Upload the file from the presentation server
FORM f1003_pre_file .
DATA: lcl_filename TYPE string.
lcl_filename = p_f1.
IF p_f1 IS NOT INITIAL.
CALL FUNCTION 'GUI_UPLOAD'
EXPORTING
filename = lcl_filename
filetype = 'ASC'
has_field_separator = 'X'
TABLES
data_tab = i_input
EXCEPTIONS
file_open_error = 1
file_read_error = 2
no_batch = 3
gui_refuse_filetransfer = 4
invalid_type = 5
no_authority = 6
unknown_error = 7
bad_data_format = 8
header_not_allowed = 9
separator_not_allowed = 10
header_too_long = 11
unknown_dp_error = 12
access_denied = 13
dp_out_of_memory = 14
disk_full = 15
dp_timeout = 16
OTHERS = 17.
IF sy-subrc <> 0.
MESSAGE s000 WITH text-031.
EXIT.
ENDIF.
ELSE.
PERFORM populate_error_log USING space
text-023.
ENDIF.
ENDFORM. " f1003_pre_file
*& Form f1004_app_file
* Upload the file from the application server
FORM f1004_app_file .
REFRESH: i_input.
OPEN DATASET p_f2 IN TEXT MODE ENCODING DEFAULT FOR INPUT.
IF sy-subrc EQ 0.
DO.
READ DATASET p_f2 INTO wa_input_rec.
IF sy-subrc EQ 0.
*-- Split The CSV record into Work Area
PERFORM f0025_record_split.
*-- Populate internal table.
APPEND wa_input TO i_input.
CLEAR wa_input.
IF sy-subrc <> 0.
MESSAGE s000 WITH text-030.
EXIT.
ENDIF.
ELSE.
EXIT.
ENDIF.
ENDDO.
* Close the file once all records have been read
CLOSE DATASET p_f2.
ENDIF.
ENDFORM. " f1004_app_file
* Move the assembly layer file into the work area
FORM f0025_record_split .
CLEAR wa_input.
SPLIT wa_input_rec AT c_split INTO
wa_input-legacykey
wa_input-bu_partner
wa_input-anlage.
ENDFORM. " f0025_record_split
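The forms above cover reading from the application server. For the other direction asked about (writing an internal table to a sequential file), the usual pattern is OPEN DATASET ... FOR OUTPUT with TRANSFER. A minimal sketch reusing the names above (the output path is made up):

```abap
* Write internal table i_input to a sequential file on the
* application server (the path below is illustrative only).
DATA: lv_file TYPE string VALUE '/local/data/interface/out.txt',
      lv_line TYPE string.

OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc = 0.
  LOOP AT i_input INTO wa_input.
    CONCATENATE wa_input-legacykey
                wa_input-bu_partner
                wa_input-anlage
           INTO lv_line SEPARATED BY c_split.
    TRANSFER lv_line TO lv_file.
  ENDLOOP.
  CLOSE DATASET lv_file.
ENDIF.
```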
Reward points if this helps.
Manish -
Hi,
"Report Builder is a report authoring environment for business users who prefer to work in the Microsoft Office environment.
You work with one report at a time. You can modify a published report directly from a report server. You can quickly build a report by adding items from the Report Part Gallery provided by report designers from your organization." - As mentioned
on TechNet.
I wonder how a non-technical business analyst can use Report Builder 3.0 to create ad-hoc reports/analyses with lists of parameters based on other datasets.
Do they need to learn T-SQL, or learn how to add and link parameters in Report Builder? And how can they add a parameter to a report? I am not sure what I am missing from the whole idea behind Report Builder.
I have SQL Server 2012 Standard and Report Builder 3.0, and I want to train non-technical users to create reports as they need them without asking the IT department.
Everything seems simple and works except parameters with lists of values, e.g. sales year list, sales month list, gender, etc.
So how can they configure parameters based on other datasets?
The workaround in my mind is to create a report with most of the columns plus the most frequently used parameters based on other datasets, and then have the non-technical user modify that report according to their needs. But that way it still restricts users to a set of predefined reports.
I want functionality like the "Excel Power View parameters" in Report Builder, which is driven from the source data but is only available in Excel 2013 onward, which most people don't have yet.
So, how should Report Builder be used? Any other thoughts or workarounds, or guidance on the purpose of Report Builder, please let me know.
Many thanks and Kind Regards,
For quick review of new features, try virtual labs: http://msdn.microsoft.com/en-us/aa570323

Hi Asam,
If we want to create a parameter that depends on another dataset, we can additionally create or add a dataset, embedded or shared, whose query contains query variables. Then use the option "Get values from a query" to get the available values. For more details, please see:
http://msdn.microsoft.com/en-us/library/dd283107.aspx
http://msdn.microsoft.com/en-us/library/dd220464.aspx
As to the Report Builder features, we can refer to the following articles:
http://technet.microsoft.com/en-us/library/hh213578.aspx
http://technet.microsoft.com/en-us/library/hh965699.aspx
Hope this helps.
Thanks,
Katherine Xiong
Katherine Xiong
TechNet Community Support -
Hi,
In the release notes of 9.1 it is mentioned that:
Display of all OIM User attributes on the Step 3: Modify Connector Configuration page
On the Step 3: Modify Connector Configuration page, the OIM - User data set now shows all the OIM User attributes. In the earlier release, the display of fields was restricted to the ones that were most commonly used.
and
Attributes of the ID field are editable
On the Step 3: Modify Connector Configuration page, you can modify some of the attributes of the ID field. The ID field stores the value that uniquely identifies a user in Oracle Identity Manager and in the target system.
Can anyone please guide me on how to get both of these? I am getting only a few fields of the user profile in the OIM - User data set, and I am also not able to modify the ID field.
I am using OIM 9.1 on WebSphere Application Server 6.1.
Thanks

Unfortunately I do not have experience using the SPML generic connector. Have you read through all the documentation pertaining to the GTC?
-Kevin