COMPLICATED LOGIC
Can someone provide me with an SQL query for the following?
with t1 as(select 140 id1, 1118441113 PCE1, to_date('2011-03-10','YYYY-MM-DD') TIME1, 783 S21, 1 ORS1, 308140 SOKE1, 21 VOE1 from dual union all --r(11)(1)
select 140, 1118441113, to_date('2011-03-10','YYYY-MM-DD'), 783, 2, 308140, 7 from dual union all --r(21)(1)
select 140, 1118441113, to_date('2011-03-10','YYYY-MM-DD'), 783, 2, 308140, 21 from dual union all --r(21)(2)
select 140, 1118441113, to_date('2011-03-10','YYYY-MM-DD'), 783, 3, 308140, 21 from dual union all --r(31)(1)
select 160, 1130306097, to_date('2011-03-10','YYYY-MM-DD'), 0, 1, 308140, 21 from dual union all --r(11)(2)
select 170, 1130306730, to_date('2011-03-10','YYYY-MM-DD'), 783, 1, 307409, 21 from dual union all --r(11)(3)
select 190, 1118441113, to_date('2011-03-10','YYYY-MM-DD'), 0, 1, 308140, 21 from dual union all --r(11)(4)
select 370, 1130305975, to_date('2011-03-10','YYYY-MM-DD'), 785, 1, 307272, 24 from dual union all --r(11)(5)
select 370, 1130305975, to_date('2011-03-10','YYYY-MM-DD'), 785, 2, 307272, 24 from dual union all --r(21)(3)
select 370, 1130305975, to_date('2011-03-10','YYYY-MM-DD'), 785, 3, 307272, 24 from dual union all --r(31)(2)
select 380, 1130305997, to_date('2011-03-10','YYYY-MM-DD'), 0, 1, 307273, 13 from dual union all --r(11)(6)
select 380, 1130305997, to_date('2011-03-10','YYYY-MM-DD'), 0, 2, 307273, 13 from dual union all --r(21)(4)
select 400, 7777777, to_date('2011-03-10','YYYY-MM-DD'), 666, 1, 300000, 19 from dual union all --r(11)(7)
select 410, 9999, to_date('2011-03-10','YYYY-MM-DD'), 777, 3, 300055, 19 from dual union all --r(31)(3)
select 579, 8999, to_date('2011-03-10','YYYY-MM-DD'), 456, 2, 300000, 67 from dual UNION ALL --r(21)(5)
select 589, 8199, to_date('2011-03-10','YYYY-MM-DD'), 496, 1, 300000, 67 from dual UNION ALL --r(11)(8)
select 589, 8199, to_date('2011-03-10','YYYY-MM-DD'), 496, 2, 300000, 67 from dual ) --r(21)(6)
, t2 as(
SELECT 370 ID2, 7777777 PCE2, to_date('2011-03-09','yyyy-mm-dd') TIME2, 666 S22, 1 ORS2, 300000 SOKE2, 19 VOE2, 0 span from dual union all
SELECT 370, 7777777, to_date('2011-03-09','yyyy-mm-dd'), 666, 2, 300000, 19, 0 from dual union all
SELECT 410, 9999, to_date('2011-03-08','yyyy-mm-dd'), 777, 1, 300055, 19, 0 from dual union all
SELECT 410, 9999, to_date('2011-03-08','yyyy-mm-dd'), 777, 2, 300055, 19, 0 from dual union all
SELECT 410, 9999, to_date('2011-03-09','yyyy-mm-dd'), 777, 3, 300055, 19, 1 from dual union all
SELECT 579, 8999, to_date('2011-03-07','yyyy-mm-dd'), 456, 1, 300000, 67, 0 from dual union all
SELECT 579, 8999, to_date('2011-03-07','yyyy-mm-dd'), 456, 2, 300000, 67, 0 from dual union all
SELECT 579, 8999, to_date('2011-03-09','yyyy-mm-dd'), 456, 3, 300000, 67, 2 from dual UNION ALL
SELECT 529, 8979, to_date('2011-03-09','yyyy-mm-dd'), 476, 1, 300000, 67, 0 from dual UNION ALL
SELECT 529, 8979, to_date('2011-03-09','yyyy-mm-dd'), 476, 2, 300000, 67, 0 from dual UNION ALL
SELECT 529, 8979, to_date('2011-03-09','yyyy-mm-dd'), 476, 3, 300000, 67, 0 from dual UNION ALL
SELECT 529, 8979, to_date('2011-03-09','yyyy-mm-dd'), 476, 4, 300000, 67, 0 from dual UNION ALL
SELECT 429, 89129, to_date('2011-03-09','yyyy-mm-dd'), 476, 1, 300000, 67, 0 from dual UNION ALL
SELECT 429, 89129, to_date('2011-03-09','yyyy-mm-dd'), 476, 2, 300000, 67, 0 from dual UNION ALL
SELECT 229, 89159, to_date('2011-03-08','yyyy-mm-dd'), 476, 1, 300000, 67, 0 from dual UNION ALL --r(12)(1)
SELECT 229, 89159, to_date('2011-03-08','yyyy-mm-dd'), 476, 2, 300000, 67, 0 from dual UNION ALL
SELECT 229, 89159, to_date('2011-03-09','yyyy-mm-dd'), 476, 3, 300000, 67, 1 from dual ) --r(12)(2)

I want this result:
140 1118441113 2011-03-10 783 1 308140 21 0 -r(11)(1)
140 1118441113 2011-03-10 783 2 308140 21 0 -r(11)(1)
140 1118441113 2011-03-10 783 3 308140 7 0 --r(21)(1)
140 1118441113 2011-03-10 783 3 308140 21 0 --r(21)(2)
140 1118441113 2011-03-10 783 4 308140 21 0--r(31)(1)
160 1130306097 2011-03-10 0 1 308140 21 0 --r(11)(2)
160 1130306097 2011-03-10 0 2 308140 21 0 --r(11)(2)
170 1130306730 2011-03-10 783 1 307409 21 0 --r(11)(3)
170 1130306730 2011-03-10 783 2 307409 21 0 --r(11)(3)
190 1118441113 2011-03-10 0 1 308140 21 0 --r(11)(4)
190 1118441113 2011-03-10 0 2 308140 21 0 --r(11)(4)
370 1130305975 2011-03-10 785 1 307272 24 0 --r(11)(5)
370 1130305975 2011-03-10 785 2 307272 24 0 --r(11)(5)
370 1130305975 2011-03-10 785 3 307272 24 0 --r(21)(3)
370 1130305975 2011-03-10 785 4 307272 24 0 --r(31)(2)
380 1130305997 2011-03-10 0 1 307273 13 0 --r(11)(6)
380 1130305997 2011-03-10 0 2 307273 13 0 --r(11)(6)
380 1130305997 2011-03-10 0 3 307273 13 0 --r(21)(4)
400 7777777 2011-03-10 666 1 300000 19 0 --r(11)(7)
400 7777777 2011-03-10 666 2 300000 19 0 --r(11)(7)
410 9999 2011-03-10 777 4 300055 19 2 --r(31)(3)
579 8999 2011-03-10 456 3 300000 67 3 --r(21)(5)
589 8199 2011-03-10 496 1 300000 67 0 --r(11)(8)
589 8199 2011-03-10 496 2 300000 67 0 --r(11)(8)
589 8199 2011-03-10 496 3 300000 67 0 --r(21)(6)
429 89129 2011-03-09 476 2 300000 67 1 --r(62)(1)
229 89159 2011-03-09 476 3 300000 67 2 --r(62)(2)

Logic: in the two tables the common columns are id1 = id2 and pce1 = pce2.
Row labels read r({rule}{set})(row no).

1) From table1 (for a set of id1 and pce1) [first set]:
rule 1: for every record with ors1 = 1, two records will be formed, with new ors 1 and 2
rule 2: for every record with ors1 = 2, a record will be formed with new ors = 3
rule 3: for every record with ors1 = 3, a record will be formed with new ors = 4
For every set of id1 and pce1, span will be calculated as [TIME1(ors=2) - TIME1(ors=1)], [TIME1(ors=3) - TIME1(ors=1)], [TIME1(ors=4) - TIME1(ors=1)].

2) From table2 [second set], labels r({rule}{set2})(row no):
rule 1: if records with ors2 = 1 and 2 are present, then pick only the records with ors1 = 2 and ors1 = 3 (if present) from table1, converted as described in point 1. The same applies if records with ors2 = 1, 2 and 3 are present.
Span calculation for the output in this scenario: TIME1(new ors 2, 3, 4) - TIME2(ors2 = 1).

rule 6 [set 2]: if records with ors2 = 2 and 3 only are present, but not ors2 = 4, for a set of values in table2, we need to add a row in the output for the date in table1 with the same ors2 value if that record is not present in table1, and the span needs to be calculated by the same formula.
A set is complete in table2 if ors 1, 2, 3 and 4 are all present.
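The set-1 rules (rules 1-3 only) could be roughly sketched like this against the t1 CTE above. This is an assumption-laden sketch, not a full solution; the table-2 rules still need their own logic:

```sql
-- Sketch of rules 1-3: each ors1=1 row yields new ors 1 and 2; ors1=2 yields 3;
-- ors1=3 yields 4. Span = TIME1(new row) - TIME1(ors=1) within each (id1, pce1).
select t.id1, t.pce1, t.time1, t.s21,
       case t.ors1 when 1 then n.new_ors       -- 1 -> 1 and 2
                   else t.ors1 + 1 end new_ors -- 2 -> 3, 3 -> 4
       , t.soke1, t.voe1,
       t.time1 - min(case when t.ors1 = 1 then t.time1 end)
                 over (partition by t.id1, t.pce1) span
from t1 t
cross join (select level new_ors from dual connect by level <= 2) n
where t.ors1 = 1 or n.new_ors = 1;
```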
Thanks in advance...
Edited by: BluShadow on 10-May-2011 08:09
tidied up a little and added {noformat}{noformat} tags to the code and data
Can you describe in more general terms what you are trying to solve? Your example is written in such a way that it is hard to follow (in fact I lost interest after the second sentence).
Who defines those rule sets and why are they defined this way? Btw. there is a rule extension framework in the database, but I doubt that you will need it here. But just in case, here is the link to the "Rule Manager and Expression Filter Developer's Guide": http://download.oracle.com/docs/cd/E11882_01/appdev.112/e14919/exprn_intro.htm#EXPRN001
Edited by: Sven W. on May 10, 2011 10:38 AM
Similar Messages
-
Complex logic with receiver determination
What is the difference between a Context Object and an XPath expression? How can we add complicated logic in receiver determination? For example, I need to check the first 4 characters of a particular field and decide the receiver at runtime. I can't do this with a Context Object or an XPath expression.

Hi,
XPath is the complete path to any field of the message structure.
A Context Object is a shorter way of referring to the XPath. It is used when the field is deep in the hierarchy.
You can create a Context Object in the IR and assign it to a particular field.
Context Objects are nothing but a short way to reference an XPath.
When you have a deeply nested XPath and need to use it in multiple locations, it can become tricky, so in your IR you create a Context Object to refer to the XPath.
Creating Context Objects is quite simple:
1. Create a new Context Object: right-click on Context Object --> New --> then give the type of the context element (Integer, char, etc.).
2. Now go to the Message Interface and you will find the column Context Object. For the corresponding XML element, give your context object name.
You can do it with Enhanced Receiver Determination.
Below are useful links.
Enhanced Receiver Determination
http://help.sap.com/saphelp_nw70/helpdata/en/43/a5f2066340332de10000000a11466f/frameset.htm
Enhanced (Mapping-Based) Interface Determination
http://help.sap.com/saphelp_nw70/helpdata/en/43/a5f2066340332de10000000a11466f/frameset.htm
Please reward points if it helps
Thanks
Vikranth -
Parallel run of the same function from multiple jobs
Hello, everyone!
I have a function which accepts a date range, reads invoices from a table partitioned by date, and writes output to a table partitioned by invoice. Each invoice can have records with only one date, so both tables may have a given invoice in only one partition, i.e. the partitions do not overlap. The function commits after processing each date. The whole process was running about 6 hrs with 46 million records in the source table.
We are expecting source table to grow over 150 million rows, so we decided to split it into 3 parallel jobs and each job will process 1/3 of dates, and, as a result, 1/3 of invoices.
So, we call this function from 3 concurrent UNIX jobs and each job passes its own range of dates.
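For what it's worth, the "3 concurrent UNIX jobs" step can also be sketched with DBMS_SCHEDULER instead of shell scripts. This is a hypothetical sketch: process_invoices and the date ranges are assumed names/values, and it assumes the routine is (or is wrapped in) a procedure:

```sql
-- Submit three independent jobs, each invoking the same routine with its own
-- date range; Oracle runs them as separate sessions.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'LOAD_INVOICES_1',
    job_type   => 'PLSQL_BLOCK',
    job_action => q'[BEGIN process_invoices(DATE '2011-01-01', DATE '2011-02-28'); END;]',
    enabled    => TRUE);
  -- ... LOAD_INVOICES_2 and LOAD_INVOICES_3 with their own date ranges ...
END;
/
```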
What we noticed is that even though we run the 3 jobs concurrently, they do not actually run that way! When the 1st job ends after 2 hrs, the number of committed rows in the target table equals the number of rows inserted by that job. When the 2nd job ends after 4 hrs, the number of rows in the target table equals the sum of the two jobs. And the 3rd job ends only after 6 hrs.
So, instead of improving the process by splitting it into 3 parallel jobs, we ended up with 3 jobs instead of one and the same 6 hrs until the target table is loaded.
My question is: how do I make this work? It looks like Oracle 11g is smart enough to recognize that all 3 jobs are calling the same function and executes the function only once at a time. I.e. it looks like only one copy of the function is loaded into memory at a time, even if it is called by 3 different sessions.
The function itself has very complicated logic and does a lot of verification by joining to other tables, and we do not want to maintain 3 copies of the same code under different names. Besides, the plan is that if we have a performance problem with 150 mln rows, we will split the work across more concurrent jobs, for example 6 or 8. Obviously we do not want to maintain that many copies of the same code under different names.
I was monitoring the jobs by querying V$SESSION and V$SQLAREA (ROWS_PROCESSED and EXECUTIONS) and I can see that each job has its own set of SIDs (i.e. runs up to 8 parallel processes), but the number of committed rows is always equal to the number of rows from the 1st job, then 2nd + 1st, etc. So it looks like all processes of the 2nd and 3rd jobs are waiting until the 1st one is done.
Any ideas?

OK, this is my SQL and results (some output columns are omitted as irrelevant):
SELECT
TRIM ( SESS.OSUSER ) "OSUser"
, TRIM ( SESS.USERNAME ) "OraUser"
, NVL(TRIM(SESS.SCHEMANAME),'------') "Schema"
, SESS.AUDSID "AudSID"
, SESS.SID "SID"
, TO_CHAR(SESS.LOGON_TIME,'HH24:MI:SS') "Sess Strt"
, SUBSTR(SQLAREA.FIRST_LOAD_TIME,12) "Tran Strt"
, NUMTODSINTERVAL((SYSDATE-TO_DATE(SQLAREA.FIRST_LOAD_TIME,'yyyy-mm-dd hh24:mi:ss')),'DAY') "Tran Time"
, SQLAREA.EXECUTIONS "Execs"
, TO_CHAR(SQLAREA.ROWS_PROCESSED,'999,999,999') "Rows"
, TO_CHAR(TRAN.USED_UREC,'999,999,999') "Undo Rec"
, TO_CHAR(TRAN.USED_UBLK,'999,999,999') "Undo Blks"
, SQLAREA.SORTS "Sorts"
, SQLAREA.FETCHES "Fetches"
, SQLAREA.LOADS "Loads"
, SQLAREA.PARSE_CALLS "Parse Calls"
, TRIM ( SESS.PROGRAM ) "Program"
, SESS.SERIAL# "Serial#"
, TRAN.STATUS "Status"
, SESS.STATE "State"
, SESS.EVENT "Event"
, SESS.P1TEXT||' '||SESS.P1 "P1"
, SESS.P2TEXT||' '||SESS.P2 "P2"
, SESS.P3TEXT||' '||SESS.P3 "P3"
, SESS.WAIT_CLASS "Wait Class"
, NUMTODSINTERVAL(SESS.WAIT_TIME_MICRO/1000000,'SECOND') "Wait Time"
, NUMTODSINTERVAL(SQLAREA.CONCURRENCY_WAIT_TIME/1000000,'SECOND') "Wait Concurr"
, NUMTODSINTERVAL(SQLAREA.CLUSTER_WAIT_TIME/1000000,'SECOND') "Wait Cluster"
, NUMTODSINTERVAL(SQLAREA.USER_IO_WAIT_TIME/1000000,'SECOND') "Wait I/O"
, SESS.ROW_WAIT_FILE# "Row Wait File"
, SESS.ROW_WAIT_OBJ# "Row Wait Obj"
, SESS.USER# "User#"
, SESS.OWNERID "OwnerID"
, SESS.SCHEMA# "Schema#"
, TRIM ( SESS.PROCESS ) "Process"
, NUMTODSINTERVAL(SQLAREA.CPU_TIME/1000000,'SECOND') "CPU Time"
, NUMTODSINTERVAL(SQLAREA.ELAPSED_TIME/1000000,'SECOND') "Elapsed Time"
, SQLAREA.DISK_READS "Disk Reads"
, SQLAREA.DIRECT_WRITES "Direct Writes"
, SQLAREA.BUFFER_GETS "Buffers"
, SQLAREA.SHARABLE_MEM "Sharable Memory"
, SQLAREA.PERSISTENT_MEM "Persistent Memory"
, SQLAREA.RUNTIME_MEM "RunTime Memory"
, TRIM ( SESS.MACHINE ) "Machine"
, TRIM ( SESS.TERMINAL ) "Terminal"
, TRIM ( SESS.TYPE ) "Type"
, SQLAREA.MODULE "Module"
, SESS.SERVICE_NAME "Service name"
FROM V$SESSION SESS
INNER JOIN V$SQLAREA SQLAREA
ON SESS.SQL_ADDRESS = SQLAREA.ADDRESS
and UPPER(SESS.STATUS) = 'ACTIVE'
LEFT JOIN V$TRANSACTION TRAN
ON TRAN.ADDR = SESS.TADDR
ORDER BY SESS.OSUSER
,SESS.USERNAME
,SESS.AUDSID
,NVL(SESS.SCHEMANAME,' ')
,SESS.SID
AudSID SID Sess Strt Tran Strt Tran Time Execs Rows Undo Rec Undo Blks Sorts Fetches Loads Parse Calls Status State Event P1 P2 P3 Wait Class Wait Time Wait Concurr Wait Cluster Wait I/O Row Wait File Row Wait Obj Process CPU Time Elapsed Time Disk Reads Direct Writes Buffers Sharable Memory Persistent Memory RunTime Memory
409585 272 22:15:36 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITED SHORT TIME PX Deq: Execute Reply sleeptime/senderid 200 passes 2 0 Idle 0 0:0:0.436000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 7 21777 22739 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 203 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.9674000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 25 124730 4180 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 210 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.11714000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 24 124730 22854 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 231 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.4623000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 46 21451 4178 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 243 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITED SHORT TIME PX qref latch function 154 sleeptime 13835058061074451432 qref 0 Other 0 0:0:0.4000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 35 21451 3550 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 252 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.19815000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 49 21451 22860 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 273 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.11621000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 22 124730 4182 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 277 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING db file parallel read files 20 blocks 125 requests 125 User I/O 0 0:0:0.242651000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 39 21451 4184 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 283 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.2781000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 42 21451 3552 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 295 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.24424000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 40 21451 22862 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 311 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.15788000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 31 21451 22856 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 242 22:15:36 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITED KNOWN TIME PX Deq: Execute Reply sleeptime/senderid 200 passes 1 0 Idle 0 0:0:0.522344000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 28 137723 22736 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 192 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.14334000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 31 21462 4202 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 222 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.16694000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 37 21462 4194 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 233 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.7731000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 44 21462 4198 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 253 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING db file parallel read files 21 blocks 125 requests 125 User I/O 0 0:0:0.792518000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 39 21462 4204 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 259 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.2961000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 35 21462 4196 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 291 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.9548000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 35 21462 4200 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409587 236 22:15:36 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq: Table Q Normal sleeptime/senderid 200 passes 2 0 Idle 0 0:0:0.91548000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 25 124870 22831 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409587 207 22:30:30 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq: Execution Msg sleeptime/senderid 268566527 passes 3 0 Idle 0 0:0:0.644662000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 43 21423 4208 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409587 241 22:30:30 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq: Execution Msg sleeptime/senderid 268566527 passes 3 0 Idle 0 0:0:0.644594000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 47 21423 4192 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409587 297 22:30:30 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING db file parallel read files 20 blocks 109 requests 109 User I/O 0 0:0:0.793261000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 12 21316 4206 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448

Here I found one interesting query: http://www.pythian.com/news/922/recent-spike-report-from-vactive_session_history-ash/
But it does not help me -
SSRS 2008 R2 - Add moving average to column group
I have a column group of dollar amounts. The row is a year/month. I would like to add a moving average column to the right of the last 6 months. My SQL Server data source is already complex enough and I'd really prefer not to add a column
there. Is there anything I can do within the report itself? Some way to reference the previous records in a matrix?
Thank you!

Hi mateoc15,
According to your description, you have a matrix in your report and want to calculate the average value of the last 6 months, right?
In Reporting Services, we can put custom code into the report to handle complicated logic: add one more column/row inside the group and call the functions defined in the custom code. For your requirement we modified Robert's code to achieve your goal. We tested your case in our local environment with sample data. Here are the steps for your reference:
Put the custom code into the report:
Private queueLength As Integer = 6
Private queueSum As Double = 0
Private queueFull As Boolean = False
Private idChange As String=""
Dim queue As New System.Collections.Generic.Queue(Of Integer)
Public Function CumulativeQueue(ByVal currentValue As Integer,id As String) As Object
Dim removedValue As Double = 0
If idChange <> id then
ClearQueue()
idChange = id
queueSum = 0
queueFull = False
Return CumulativeQueue(currentValue, id) ' return the recursive result (the original discarded it)
Else
If queue.Count >= queueLength Then
removedValue = queue.Dequeue()
End If
queueSum += currentValue
queueSum -= removedValue
queue.Enqueue(currentValue)
If queue.Count < queueLength Then
Return Nothing
ElseIf queue.Count = queueLength And queueFull = False Then
queueFull = True
Return queueSum / queueLength
Else
Return (queueSum) / queueLength
End If
End If
End Function
Public Sub ClearQueue()
Dim i As Integer
Dim n As Integer = queue.Count - 1
For i = n To 0 Step -1
queue.Dequeue()
Next i
End Sub
Add one more row inside the group and call the function defined in the custom code.
Save and preview the report.
Reference:
Moving or rolling average, how to?
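If the data source ever moves to SQL Server 2012 or later, the same 6-month rolling average could be computed in the query itself instead of report code (a sketch only; monthly_sales, year_month and amount are assumed names, and 2008 R2 lacks the ROWS framing used here):

```sql
-- 6-row moving average per month; NULL until a full 6-month window exists.
-- Requires SQL Server 2012+ for the ROWS BETWEEN frame.
SELECT year_month,
       amount,
       CASE WHEN COUNT(*) OVER (ORDER BY year_month
                                ROWS BETWEEN 5 PRECEDING AND CURRENT ROW) = 6
            THEN AVG(amount) OVER (ORDER BY year_month
                                   ROWS BETWEEN 5 PRECEDING AND CURRENT ROW)
       END AS moving_avg_6m
FROM monthly_sales;
```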
If you have any question, please feel free to ask.
Best Regards,
Simon Hou (Pactera) -
Microsoft could win in mobile...
If Sun doesn't deal with some major issues in J2ME.
Given the current situation with J2ME, I think Microsoft may be able to pull off a repeat of what happened in the browser wars ... but this time it's the J2ME vendors that are making it happen.
Here are the problems with Java handsets (J2ME) we are seeing.
1. The handset emulators are almost useless for testing mobile apps. As a developer you have no choice but to build and test on every handset you intend to sell for. It's not only that stuff that works on the Emulator doesn't work on a handset, sometimes it's stuff that doesn't work on the Emulator but DOES work on a handset.
2. The extensions (for example, Camera functions) are not always standardized and are often kept secret by the handset manufacturers.
3. Some handset companies simply refuse to support developers.
EXAMPLE: An inquiry into the camera functions on a LG Handset (for a camera based game) produced this response.
Thank you for inquiring of LG Electronics. Unfortunately, LG cannot supply you with this information as it is proprietary information. Furthermore, LG does not support nor provide SDKs for our phones. For further assistance with your request, you may want to contact the service providers that carry LG phones. Please feel free to contact us with any additional questions or concerns. Thank you again for contacting LG Electronics.
4. Sun is NOT verifying compliance of the handsets and their associated J2ME implementations with the standards.
5. Even within the standard, the VM's are broken:
I've been involved as a producer and designer of games for A VERY LONG TIME (TRS-80 anyone?) and have been working in mobile since 2000.
J2ME Portability? This must be a definition of portability I was previously unaware of. If you don't test on every specific handset, even within a family, your code may not work. We have seen commercial products for specific handsets that failed, in many cases because the publisher assumed that testing on one handset in a family of handsets was sufficient. I can assure you that it is not.
What have programmers seen? Here's some examples:
a. Character array out of bounds exceptions behave differently on different phones. That's right, something as simple as when an exception is thrown showing different behavior on different phones. According to the J2ME standard, the String.substring(int beginIndex,int endIndex) method throws an IndexOutOfBoundsException "if the begin index is negative, or end index is larger than the length of this String object, or beginIndex is larger than endIndex."
This works in practice on the Nokia 3650. However, on the Motorola i95cl the IndexOutOfBoundsException is thrown when beginIndex is EQUAL TO OR LARGER than endIndex. This does not conform the standard and causes code to run differently on the two phones.
b. Nested loops failing. We fixed a problem on a relatively new phone (the Nokia 6600) by removing a nested loop in a Java game. Problem did NOT occur on the Nokia 3650. The J2ME was actually skipping the execution of the inner loop at times. The problem did not occur on other series 60 handsets.
c. Broken memory managers. Failure of GC to regain space from discarded objects.
d. HTTP I/O functionality is unreliable. Standard tricks such as closing an http connection to force a timeout don't work on most devices. You have to monitor I/O processes to kill threads when they freeze.
... and there's so much more.
Then there's the issue of the specification itself. If only the folks who designed MIDP 1.0 had talked to someone, anyone, in the game business. If you need to detect a button press you're going to be device-specific. The key codes for the cursor pad and number pad vary from handset to handset. Yes, MIDP 1.0 has "game actions" that are supposed to abstract the phones so that you don't rely on all phones having arrow keys and fire buttons, but this actually causes more problems than it solves. Example: on the Nokia 3650 a call to getGameAction(keyCode) may identify the key as UP. However, the keyCode parameter may also be the ID for the 2 key. Now you have to inject complicated logic to distinguish whether to interpret it as a "2" or as UP. Then there's stuff like trying to draw individual pixels...
MIDP 2.0 looks promising, but the MIDP 2.0 handsets we have are showing the same "write once, test everywhere" characteristics we have become familiar with in MIDP 1.0
So what about Microsoft? Microsoft is all about "developers, developers, developers" ... unless someone from Sun et al. gets serious about this, they could lose their market advantage. Programmers tell me that the mobile development suite from Microsoft works well, and that most of the I/O issues they have with J2ME don't occur in the Microsoft suite.
Here's a thought ... how about a compatibility suite and certification? That is, a series of programs that tests a large range of the functions, with user interactions to ensure proper graphics and UI behavior.
I've seen this situation before (broken implementations) ... back in 1989 with CD-ROM on PCs and the MSCDEX drivers. We (Activision at that time) created a test suite, contacted every CD-ROM company and provided the results so the drivers could be fixed.
Now what I have been told, in regards to the KVM's, is that it's pretty much a hands off deal by Sun. Handset companies just buy off the shelf VM's that kinda work.
If you wonder why there aren't a slew of multiplayer games or network apps .. this is the main reason. I'd estimate that over half the expense in building "The Dozens"tm (Mobile Multiplayer Game) has been in working out how to deal with the "lock ups" in HTTP I/O and other issues.
I think there's time to fix this for MIDP 2.0. Sun or someone needs to create a set of rigorous test suites that really determine if the VM meets the spec, and in particular exercise the I/O and memory management aspects of the J2ME VM. You can't blame Microsoft for these problems; they are entirely self-inflicted.
William Volk, CEO, Bonus Mobile Entertainment

Do you think anybody is really interested in your opinion about OSS Java?
OSS Java means taking control out of Sun's hands; if a JVM is not standard it may not be called "Java", so there is no compatibility problem there.
HAVE YOU EVER READ SUN'S LICENSE? IT'S DISGUSTING, ANNOYING, JUST CRAP.
That is what OSS means, not incompatibilities and 100,000 forks.
But you are just telling everybody what you think - something nobody is interested in. I am sure you have NEVER read the license under which Java stands, have you?

I don't need to read the license, I know it's fine for me ... we were not talking about that anyway -
SSRS multi value parameter expansion invalidates query syntax
These results are from running a demo solution and profiling the server.
--- here is the text of the query saved in the report's design ---
select *
from test_ssl
where [who] in(case when @P1 = 'all' then [who] else @P1 end)
and [recid] in(case when @P2 = 'all' then [recid] else @P2 end)
Executing the report works when @P1 = ‘all’, @P2 = ‘all’
---------------------------------------- as executed at the server ---------------------------
(@P1 nvarchar(3),@P2 nvarchar(3))
select *
from test_ssl
where [who] in(case when @P1 = 'all' then [who] else @P1 end)
and [recid] in(case when @P2 = 'all' then [recid] else @P2 end)
Executing the report fails when @P1 = (‘kid’,’adult’) & @P2 = ‘all’
An error has occurred during report processing. (rsProcessingAborted)
Query execution failed for dataset 'DataSet1'. (rsErrorExecutingCommand)
For more information about this error navigate to the report server on the local server machine, or enable remote errors
---------------------------------------- as executed at the server ---------------------------
exec sp_executesql
N'select *
from test_ssl
where [who] in(case when N''kid'',N''adult'' = ''all'' then [who] else
N''kid'',N''adult'' end)
and [recid] in(case when @P2 = ''all'' then [recid] else @P2 end)
', N'@P2 nvarchar(3)', @P2=N'all'
SSRS expanded @P1 into its values, substituted 'kid','adult' for @P1 in the query text, then sent the text to SQL Server using sp_executesql().
This substitution invalidates the SQL syntax. The report fails.
For me to implement this simple example, with only 2 parameters where @P1 and @P2 can each independently be 'all', one value, or more than one value, will take 4 separate queries in the report:
Query1: IF @P1 = all, @P2 = all BEGIN ... query text 1 ... END
Query2: IF @P1 != all, @P2 = all BEGIN ... query text 2 ... END
Query3: IF @P1 = all, @P2 != all BEGIN ... query text 3 ... END
and
Query4: IF @P1 != all, @P2 != all BEGIN ... query text 4 ... END
Each query will have to have a unique WHERE clause.
In my actual work problem some reports take 8 parameters that will be either 'all', 1 value, or multiple values.
This means the report would need 256 queries (2^8 combinations), each with its own unique WHERE clause and each wrapped in its own IF @P..... = 'all' BEGIN ... query text ... END
Who knows what to do about this issue?
Hello!

Hi Stevesl,
I have tested in my local environment and the query is indeed invalid.
As Jan Pieter Posthuma mentioned, you can just add the filter as below in the query. When you set "Allow multiple values" on the parameter, you get "Select All" in the dropdown list, so there is no need to add an "all" value again.
select * from test_ssl
where [who] in(@P1) and [recid] in(@P2)
You can also add filters in the dataset; detailed information is below for your reference.
By default you can add many filters, and the default logic between these filters is "And". If you want some "Or" filters, you can refer to the blog below:
FAQ: How do I implement OR logic or complicated logics for filters in a SSRS report?
Details information about filter in SSRS :
Add a Filter to a Dataset (Report Builder and SSRS)
Because you have a lot of parameters, you can also consider adding cascading parameters:
Add Cascading Parameters to a Report (Report Builder and SSRS)
If you still have any problem, please feel free to ask.
Regards
Vicky Liu
TechNet Community Support -
Cascaded AND/OR-expressions in smart playlists
Is there some way to generate smart playlists with somewhat more complicated logical expressions than plain ANDs and ORs? Like: (Artist="Alice Cooper" AND Album !="Classicks") OR (Artist="AC/DC")?
I know I can do it by creating more playlists and then combining them, but I'd like to have this in 1 smart playlist without all those "dummy" playlists.
thx in advance
No, that is not possible.
I suggest, you ask this to be a feature on this page:
http://www.apple.com/macosx/feedback/
I've done that also. Maybe we'll see this in a future upgrade.
M -
Is MS Word really the best Oracle can come with for designing layouts?
Hi all,
I am returning to XMLP development after first looking at it two years ago. Back then I advised our company not to use it for all report development as MS Word didn't seem robust or flexible enough to generate report layouts.
Now I am working for a client that wants to use XMLP and am trying to develop a fairly simple tabular report, but with about 5 levels of grouping. The resulting RTF template therefore has 5 levels of nested tables. Whilst all the data is coming out fine, I am finding it very hard to make the layout look right, as within Word it is hard to select table cells, visually establish the hierarchy of the tables, etc. A simple task like putting the correct borders around cells so they render correctly is very hard to get right. In addition to this, I have a number of areas where erroneous white space is being displayed.
Is Oracle seriously going to continue to expect developers to use MS Word as a layout designer or are they going to develop a decent product fit for the task? I am tired of hearing from sales guys, functional consultants and Oracle blurb how easy XMLP is to develop reports in. Yes, it is if the report is simple, but try and develop a more complex layout and it is anything but simple.
Anyone got any advice or experience they can share? Are there any alternative tools on the market? A response from Oracle wouldn't go amiss to help stem my disappointment with both product and company!
Cheers,
Jon.
Hi Guys,
forgive me that I am biased - I am responsible for the BI Publisher tool development.
First, one comment regarding the nested tables - the code we generate for them is the worst part of the template builder. We have already implemented a new version, which is much better, to be released after 10.1.3.4. I apologize for us not fixing that issue earlier.
Yes, the BI Publisher template builder for Word is different. Naturally it is better for some tasks and maybe weaker for some others. Nobody forced the E-Business Suite teams to take up BI/XML Publisher - I was PM in one of the teams, so I know for sure - we all decided that we would rather use BI Publisher. In our eyes it is a step forward and not 10 steps back.
I was product manager for Oracle Contracts. We tried to create a contract layout with Oracle Reports and it was not pretty. Sure, if I create just some data reports, all the classical reporting tools are fine (do you really like these other tools?).
But creating contracts or more business-style reports like invoices is MUCH easier with BI Publisher. With the first generation of the tools, it took me 2 hours to convert a customer contract example (in Word, naturally) into a production-ready contract layout. Now, I could probably do it in 15 minutes.
While PL/SQL has great acceptance in the Oracle community, it is not everyone's favorite - for example, PeopleSoft, JD Edwards or Siebel developers cannot use PL/SQL.
XSL has its quirks, and I wish it were a better language, but it is an OPEN standard.
We have migrated about 3000 Oracle reports, and all of the Peoplesoft and JD Edwards reports... I don't think it is a toy. It has the same concept as Java Server Pages and it uses Word for formatting instead of HTML.
True, knowing SQL does not help with BI Publisher reports, but a user with a little SQL knowledge can wreak havoc on a production system. BI Publisher completely separates the responsibilities: SQL for professionals and Word for formatting. I think there are more people out there who know Word than SQL.
Embedding the XSL did not turn out all that straightforward. We are also missing one feature in the standalone product that we had in E-Business Suite and that will be included in the 11g release: subtemplates.
Subtemplates allow you to write complicated XSL libraries that can be called from within RTF templates. For all of you who need to write complicated logic, it is much better to put the logic into an XSL stylesheet and call out to it from within the Word template than to code the complicated logic into the Word document. Again, my apologies for not providing this option earlier.
People also had to learn java for Oracle applications, when they only needed to know PL/SQL before.
For all the people who don't like Microsoft Word, we are offering a web based alternative shortly after the 10.1.3.4 release. This tool may be better suited for data style reports - you will be the judge.
I appreciate your feedback and am open to any suggested improvements.
Thanks,
Klaus
Message was edited by:
KlausFabian -
How to do File Comparison in SAP PI
Hi All,
I have another requirement.
I have two text files, both containing a list of materials. I want to compare file A with file B and add the materials from file B that are not in file A.
For example
Input: File A
15-G
12-B
18-A
18-D
Input: File B
15-J
12-B
19-C
Output: Updated File A
15-G
12-B
18-A
18-D
15-J
19-C
As you can see, the material 12-B already exists in file A so it isn't copied across, but 15-J and 19-C were copied across. Do you know how I could do this?
Any suggestions?
Thanks,
Hi,
The easiest approach is to do as I described above:
Read both files in the sender file adapter, use an ABAP mapping (it should be much easier than a graphical one, as some quite complicated logic is required to detect and delete duplicates), and write the results to the target file with File Construction Mode = Create in the receiver CC.
And use the Additional File(s) feature of the sender channel to get multiple files, as described here in Q4 and Q5:
http://wiki.sdn.sap.com/wiki/display/XI/SenderFileAdapterFrequentlyAsked+Questions
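The duplicate detection the mapping needs is simple in any language. A Python sketch of the logic the ABAP mapping would implement (the function name and the plain-list representation of the files are assumptions for illustration):

```python
# Sketch of the merge logic: keep file A as-is and append only those
# materials from file B that A lacks, preserving order.

def merge_materials(file_a_lines, file_b_lines):
    seen = set(file_a_lines)        # duplicate detection
    merged = list(file_a_lines)
    for material in file_b_lines:
        if material not in seen:
            merged.append(material)
            seen.add(material)
    return merged

file_a = ["15-G", "12-B", "18-A", "18-D"]
file_b = ["15-J", "12-B", "19-C"]
print(merge_materials(file_a, file_b))
# ['15-G', '12-B', '18-A', '18-D', '15-J', '19-C']
```

This reproduces the example from the question: 12-B is skipped as a duplicate while 15-J and 19-C are appended.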
Hope this helps,
Greg -
Can I create a XSL function and use it in the Word Template
I have a complicated piece of logic (complicated if I have to repeat it 400 to 500 times) for determining whether I should show a null expression: if the value is null or equal to 0, show N/A, else show the value.
If I have to repeat this over and over again and it turns out to need an update, this would be a nightmare. But I don't have time to waste on a wild goose chase either; if I have to do something a couple thousand times, I'd better get started, basically.
Can I create a function in the word template then call it throughout the word template for each field I have to check?
Have a link to a tutorial that is doing this?
If you really want to have a function, you have the choice of using subtemplates (either RTF or XSL).
You can check for steps here
http://www.oracle.com/technetwork/middleware/bi-publisher/overview/bip-subtemplate-1-132933.pdf
But whether you have functions or use the code directly, you will anyway need to modify your 400-500 fields,
i.e. either
<?xdoxslt:ifelse(COLUMN='','NA',COLUMN)?>
or <?call:template_name?> -
Product: ORACLE SERVER
Date written: 2002-04-10
ABOUT OUTER JOIN
====================
Purpose
Understand the effect of outer joins and how to use them.
Explanation
1. Concepts
First, let us look at the following terms:
1) outer-join column - a column that uses the (+) symbol.
For example, EMPNO(+) and DEPT.DEPTNO(+) are outer join columns.
2) simple predicate - a simple relational expression of the form A = B,
containing no AND, OR, or NOT.
3) outer join predicate - a simple predicate that contains one or more
outer join columns.
2. OUTER JOIN USAGE - RULES
In an outer join predicate, only the columns of a single table may be
used as outer join columns; that is, all outer join columns in a single
outer join predicate must belong to the same table.
For this reason, the following statement is invalid:
EMP.EMPNO(+) = DEPT.DEPTNO(+)
Here the outer join columns belong to two different tables.
If one column of a table in a predicate is an outer join column, then
all columns of that table in the predicate must be outer join columns.
For this reason, the following statement is invalid:
EMP.SAL + EMP.COMM(+) = SALGRADE.HIGH
because the columns of one table are mixed: some are outer join columns
and some are not.
In a predicate, the table marked with (+) directly outer-joins the
other table. A table indirectly outer-joins another table when the
tables in between are themselves outer-joined in turn.
A table is not allowed to outer-join to itself, whether directly or
indirectly.
The following statement is invalid for this reason:
EMP.EMPNO(+) = PERS.EMPNO
AND PERS.DEPTNO(+) = DEPT.DEPTNO
AND DEPT.JOB(+) = EMP.JOB - circular outer
join relationship
3. OUTER JOIN EXECUTION
A given table T may have both outer join and non-outer join predicates.
At execution time, processing proceeds as follows:
1) The result of joining all tables mentioned in table T's
outer join predicates is formed ( by recursive application
of this algorithm ).
2) For each row of the result, a set of composite rows is
formed, each consisting of the original row in the
result joined to a row in table T for which the composite
row satisfies all of table T's outer join predicates.
3) If a set of composite rows is the null set, a composite
row is created consisting of the original row in the
result joined to a row similar to those in table T, but
with all values set to null.
4) Rows that do not pass the non-outer join predicates are removed.
This may be summarised as follows. Outer join
predicates ( those with (+) after a column of table T ), are
evaluated BEFORE table T is augmented with a null row. The null
row is added only if there are NO rows in table T that satisfy
the outer join predicates. Non-outer join predicates are
evaluated AFTER table T is augmented with a null row (if needed)
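The evaluation order described above can be simulated directly. The following Python sketch is an illustrative model of the algorithm, not how Oracle actually executes joins; the sample rows are hypothetical:

```python
# Model of the described order: outer join predicates run BEFORE null
# augmentation, non-outer join predicates run AFTER it.

def outer_join(left_rows, right_rows, pre_join, post_join):
    result = []
    for l in left_rows:
        # Step 2: rows of T satisfying the outer join predicates.
        matches = [r for r in right_rows if pre_join(l, r)]
        if not matches:
            matches = [None]          # Step 3: null augmentation
        for r in matches:
            if post_join(l, r):       # Step 4: non-outer join predicates
                result.append((l, r))
    return result

dept = [{"deptno": 10, "loc": "NY"}, {"deptno": 40, "loc": "BOSTON"}]
emp = [{"deptno": 10, "ename": "SMITH", "job": "CLERK"}]

# Mirrors example 2) below: EMP.DEPTNO IS NULL keeps only the
# null-augmented rows, i.e. departments with no employees.
rows = outer_join(
    dept, emp,
    pre_join=lambda d, e: d["deptno"] == e["deptno"],
    post_join=lambda d, e: e is None,
)
print([d["loc"] for d, e in rows])  # ['BOSTON']
```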
4. OUTER JOIN - RECOMMENDATIONS
Certain types of outer joins in complicated logical
expressions may not be well formed. In general, outer join
columns in predicates that are branches of an OR should be
avoided. Inconsistencies between the branches of the OR can
result in an ambiguous query, and this may not be detected. It
is best to confine outer join columns to the top level of the
'where' clause, or to nested AND's only.
5. OUTER JOIN - ILLUSTRATIVE EXAMPLES
1) Simple Outer Join
SELECT ENAME, LOC
FROM DEPT, EMP
WHERE DEPT.DEPTNO = EMP.DEPTNO(+)
The predicate is evaluated BEFORE null augmentation. If
there is a DEPT row for which there are no EMP rows, then a null
EMP row is concatenated to the DEPT row.
2) Outer Join With Simple Post-Join Predicates
SELECT ENAME, LOC
FROM DEPT, EMP
WHERE DEPT.DEPTNO = EMP.DEPTNO(+)
AND EMP.DEPTNO IS NULL
The second simple predicate is evaluated AFTER null
augmentation, since there is no (+), removing rows which were
not the result of null augmentation and hence leaving only DEPT
rows for which there was no corresponding EMP row.
3) Outer Join With Additional Pre-Join Predicates
SELECT ENAME, LOC
FROM DEPT, EMP
WHERE DEPT.DEPTNO = EMP.DEPTNO(+)
AND 'CLERK' = EMP.JOB(+)
AND EMP.DEPTNO IS NULL
The predicate on EMP.JOB is evaluated at the same time
as the one on EMP.DEPTNO - before null augmentation. As a
result, a null row is augmented to any DEPT row for which there
are no corresponding clerks in the EMP table. Therefore, this
query displays departments containing no clerks.
Note that if the (+) were omitted from the EMP.JOB
predicate, no rows would be returned. In this case, both the
EMP.JOB and EMP.DEPTNO IS NULL predicates are evaluated AFTER
the outer join, and there can be no rows for which both are
true.
I had to put it in a subquery? (if that's what it's called)
SELECT a1.date_field DateAndHour, b1.OR_date, NVL(b1.record_count,0)
FROM MASTER_DATE_TABLE a1,
(SELECT TO_CHAR(b.OR_IN_DTTM,'YYYYMMDDHH24') OR_date, COUNT(*) record_count
FROM hsa_tgt.PICIS_OR b
GROUP BY TO_CHAR(b.OR_IN_DTTM,'YYYYMMDDHH24')) b1
WHERE a1.date_field = b1.OR_date (+)
GROUP BY a1.date_field, b1.OR_date, b1.record_count
HAVING (TO_DATE(a1.date_field,'YYYYMMDDHH24') BETWEEN '01-Jan-2006' AND '31-Jan-2006')
ORDER BY a1.date_field; -
Restrictions for using sql commands and operators in loader control file
Hi ,
It seems that there are a lot of restrictions and limitations when using SQL commands and operators in loader control files; for example, it seems I cannot use OR with a CASE statement, and there also seems to be a length limit for the CASE expression.
So guys, what are the common limitations and restrictions to be avoided in the loader control file ?
Your efforts are highly appreciated
Ash
Hi Ash,
if you need to do more complicated logic, it is better to define the file to be loaded as an external table. You can then use any SQL function you like against the external table rather than messing around with what you can and can't do in a sqlldr control file.
You can use the external_table option of sqlldr to generate the definition.
Regards,
Harry
http://dbaharrison.blogspot.com/ -
Let me know the BIW 7.0 flow
Dear All,
I am new to BIW 7.0; before, I was working with 3.0B. Kindly let me know the difference between the two. I can see here in the Modelling area DataSources, DTPs, Transformations and so many other areas. Let me know this flow in detail. Kindly help me in this regard.
Regards
Sathiya
Hi Sathiya,
1. I want to know the difference between the transformation field in the data source and transformation field in the data target also.
[Anil]: The question is not that clear, but just in case you want to know about transformations, below are the links.
Transformations
http://help.sap.com/saphelp_nw2004s/helpdata/en/d5/da13426e48db2ce10000000a1550b0/content.htm
Routines
http://help.sap.com/saphelp_nw2004s/helpdata/en/43/857adf7d452679e10000000a1553f7/content.htm
Difference in Transfer/Update rules routines to Transformations of nw2004s
http://help.sap.com/saphelp_nw2004s/helpdata/en/44/bd9b5be97c112ce10000000a11466f/content.htm
2. Why there is a field in transfer rule in data source tree?
[Anil]: If you are on BI 7.0 you don't need transfer rules. You can map the fields of the data source to the data target with transformations. Right-click the data target, choose transformations, give the data source name, and you will be guided.
3. What is migrate in the data source tree?
[Anil]: Below are few important points:
If you want to migrate your 3.x DataSource to a SAP NW 2004s BI DataSource, use the inbound adapter functionality; this will mean losing your DataSource, transfer rules and PSA. Bear this in mind if there is complicated logic within your transfer rules: you will need to migrate it into a transformation first, and only then do the DataSource migration. A new PSA table will be generated. There should be no data in your PSA table when you do the migration, as it will be lost.
You do not have to migrate all your update/transfer rules to the new concept. You can still define update/transfer rules in SAP NW 2004s, although this is a little more hidden, available via Additional Functions in the context menu.
4. What is data transfer process in data source tree?
[Anil]: Here is the Step by step process to create DTP -
http://help.sap.com/saphelp_nw04s/helpdata/en/42/fa50e40f501a77e10000000a422035/content.htm
5. How to load master data - is this the same as in 3.0B?
[Anil]: No, it is not the same. But for loading hierarchies we still follow the 3.x procedure. Below are the steps for loading master data:
1. Create the infoobjects you might need in BI
2. Create your target infoprovider-Infoobject
3 Create a source system
4. Create a datasource or use replicated R/3 datasource
5. Create and configure an Infopackage that will bring your records to the PSA
6. Create a transformation from the datasource to the Infoprovider
7. Create a Data Transfer Process (DTP) from the datasource to the Infoprovider
8. Schedule the infopackage
9. Once successful, run the DTP to get data from PSA to Infoobject
10. Check Infoobject for data
loading Infoobject using Flat-File
http://help.sap.com/saphelp_nw2004s/helpdata/en/43/01ed2fe3811a77e10000000a422035/content.htm
6. If we are taking data from more than one data source, how do we have to process it?
[Anil]: Below are different scenarios for using data sources.
http://help.sap.com/saphelp_nw04s/helpdata/en/44/0243dd8ae1603ae10000000a1553f6/content.htm
Assign points if this helps.
Regards,
Anil -
Using HR_INFOTYPE_OPERATION in external subroutine for Dynamic Actions
Hi,
I am calling an external subroutine in the Dynamic Actions of an Infotype. In this external subroutine, I am using HR_INFOTYPE_OPERATION to modify OTHER records of the same Infotype number.
However, when I tried to trigger the Dynamic Actions in PA30, the other infotype records got modified as intended. But when I refreshed the PA30 screen, the changes were reversed as if the HR_INFOTYPE_OPERATION had not been carried out at all. I have a COMMIT WORK after the HR_INFOTYPE_OPERATION and have refreshed the buffer, but it doesn't seem to work.
My question is: Can i use HR_INFOTYPE_OPERATION in an external subroutine which is called during dynamic actions? As I have some complicated logic, I do not want to embed the coding in the Dynamic Actions. Is there a way for HR_INFOTYPE_OPERATION to work in the external subroutine with the changes being committed to the database?
Thank you.
Hi,
I remember the same problem being faced by some of the forum members.
Suresh Datti had replied: "Call the subroutine in another program using a SUBMIT statement. This will create two sessions and will update the DB." This was working fine for the users.
Hope you can try this.
Just call a program using SUBMIT statement and code your form routine inside that.
Hope this helps you.
Regards,
Subbu. -
Save for previous version doesn't warn about incompatibilities
I just back-saved some code from LV2012 to LV2009. The code included some conditional indexing. I expected that either:
The code would be converted to a LV2009 functional equivalent without conditional indexing
I would get a warning about the incompatibility
I was disappointed to find that I got neither....
I thought we previously got warnings about incompatibility... Or is my brain superimposing some wishful thinking onto my memories again?
Chris Virgona
Solved!
Go to Solution.
Just getting back to this thread after a busy week...
The project that I backsaved included 2 VIs with conditional indexing. Neither had particularly complicated logic, and the data types were a scalar and a small flat cluster of standard types. The one interesting thing about it was that both a colleague and I noticed the problematic backsave, on different PCs and with different revisions of the code.
Anyway, I since tried to reproduce the issue but failed... (Doesn't that always happen straight after a forum post?!) Now when I do a backsave (on similar code, lost track of the original code where I noticed this) I get the build array + case structure + shift register...
BUT there is one problem: The new shift register always seems to be uninitialised. That seems buggy to me... (It definitely produces defective code in my case.) Is it that way by design for some reason??
Chris Virgona
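The uninitialised-shift-register hazard described above has a direct analogue in text-based code: an accumulator that is not reset on entry carries values over from the previous run. A small Python sketch, purely hypothetical, to illustrate why the generated back-save pattern is defective when the register is left uninitialised:

```python
# The back-saved pattern is roughly: loop + case structure + shift register
# that appends matching elements. An uninitialised shift register keeps
# whatever it held from the previous run.

stale = []  # plays the role of an uninitialised shift register

def filter_bad(data, acc=stale):
    # BUG: the accumulator survives across calls, like an uninitialised
    # shift register in LabVIEW.
    for x in data:
        if x > 0:
            acc.append(x)
    return acc

def filter_good(data):
    acc = []  # initialised "shift register": fresh on every run
    for x in data:
        if x > 0:
            acc.append(x)
    return acc

print(filter_bad([1, -2, 3]))   # [1, 3]
print(filter_bad([5]))          # [1, 3, 5]  <- stale values leak in
print(filter_good([5]))         # [5]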