Cost Data Sets
What is the best way to upload the Cost Data Sets into P4P?
If we choose to use a SQL Load directly, which tables in the database contain the cost information?
PLM for Process provides a web service for this: the Cost Services web service. If you are loading through the database directly, though, the tables it populates are:
- costmessages - used to group cost items into one load
- costitems - the individual spec cost, by cost type, currency, SCRM facility, etc.
Please see the Web Services guide for details about the data expected in these tables. Note that you will also have to provide a sequence number, which should simply be the next highest number.
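As a sketch of that sequence-number logic (the table and column names below are simplified guesses for illustration, not the actual P4P schema), the "next highest number" is just MAX + 1 over the existing load messages:

```python
import sqlite3

# Hypothetical, simplified shapes for the two tables named above; the real
# schema has more columns, so treat this purely as an illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE costmessages (sequence_number INTEGER PRIMARY KEY, description TEXT);
    CREATE TABLE costitems (sequence_number INTEGER, spec_id TEXT, cost_type TEXT,
                            currency TEXT, facility TEXT, cost REAL);
""")

def next_sequence_number(conn):
    # "Next highest number": max existing sequence number plus one.
    (current,) = conn.execute(
        "SELECT COALESCE(MAX(sequence_number), 0) FROM costmessages").fetchone()
    return current + 1

seq = next_sequence_number(conn)           # 1 on an empty table
conn.execute("INSERT INTO costmessages VALUES (?, ?)", (seq, "cost load"))
conn.execute("INSERT INTO costitems VALUES (?, ?, ?, ?, ?, ?)",
             (seq, "SPEC-001", "STANDARD", "USD", "PLANT-A", 12.5))
print(next_sequence_number(conn))          # 2
```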
Similar Messages
-
The VirtualizingObservableCollection does the following:
Implements the same interfaces and methods as ObservableCollection<T> so you can use it anywhere you’d use an ObservableCollection<T> – no need to change any of your existing controls.
Supports true multi-user read/write without resets (maximizing performance for large-scale concurrency scenarios).
Manages memory on its own so it never runs out of memory, no matter how large the data set is (especially important for mobile devices).
Natively works asynchronously – great for slow network connections and occasionally-connected models.
Works great out of the box, but is flexible and extendable enough to customize for your needs.
Has a data access performance curve so good it’s just as fast as the regular ObservableCollection – the cost of using it is negligible.
Works in any .NET project because it’s implemented as a Portable Class Library (PCL).
The latest package can be found on NuGet (Install-Package VirtualizingObservableCollection). The source is on GitHub.
Good job, thank you for sharing
Best Regards,
Please remember to mark the replies as answers if they help -
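The data-virtualization idea described above, stripped to its core, is page-based lazy loading. The toy sketch below is not the library's actual API, just an illustration of the mechanism:

```python
# Toy sketch of data virtualization: items are fetched in fixed-size pages
# on demand and cached, so only the pages a view actually touches are ever
# held in memory. A real implementation would also evict cold pages.
class VirtualizingList:
    def __init__(self, count, fetch_page, page_size=100):
        self._count = count          # total size, known up front
        self._fetch = fetch_page     # callable: page_index -> list of items
        self._size = page_size
        self._pages = {}             # page cache

    def __len__(self):
        return self._count

    def __getitem__(self, index):
        page, offset = divmod(index, self._size)
        if page not in self._pages:
            self._pages[page] = self._fetch(page)   # lazy page load
        return self._pages[page][offset]

# Simulated backing store: page i holds the numbers i*100 .. i*100+99.
vlist = VirtualizingList(1000, lambda p: list(range(p * 100, p * 100 + 100)))
print(vlist[250])          # 250 -- only page 2 was fetched
print(len(vlist._pages))   # 1
```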
10g: parallel pipelined table func - distributing DISTINCT data sets
Hi,
I want to distribute data records, selected from a cursor, via a parallel pipelined table function to multiple worker threads for processing and returning result records.
The tables I am selecting data from are partitioned and subpartitioned.
All tables share the same partitioning/subpartitioning schema.
Each table has a column 'Subpartition_Key', which is hashed to a physical subpartition.
E.g. the Subpartition_Key ranges from 000...999, but we have only 10 physical subpartitions.
The select of records is done partition-wise - one partition after another (in bulks).
The parallel worker threads select more data from other tables for their processing (2nd-level select).
Now my goal is to distribute the initial records to the worker threads in such a way that they operate on distinct subpartitions, to decouple access to resources (for the 2nd-level select).
But I cannot just use 'parallel_enable(partition curStage1 by hash(subpartition_key))' for the distribution.
hash(subpartition_key) (hashing A) does not match hashing B, which is used to assign the physical subpartition on INSERT into the tables.
Even when I remodel hashing B, calculate some SubPartNo(subpartition_key), and use that for 'parallel_enable(partition curStage1 by hash(SubPartNo))', it doesn't work.
'parallel_enable(partition curStage1 by range(SubPartNo))' doesn't help either. The load distribution is unbalanced: some worker threads get data of one subpartition, some of multiple subpartitions, and some are idle.
How can I distribute the records to the worker threads according to a given subpartition schema?
[Amendment: Actually the hashing for parallel_enable is counterproductive here; it would be better to have something like 'parallel_enable(partition curStage1 by SubPartNo)'.]
- many thanks!
best regards,
Frank
Edited by: user8704911 on Jan 12, 2012 2:51 AM
Hello
A couple of things to note. First, when you use partition by hash (or range) on 10gR2 and above, there is an additional BUFFER SORT operation versus partition by ANY. For small data sets this is not necessarily an issue, but the temp space used by this stage can be significant for larger data sets, so be sure to check temp space usage for this process or you could run into problems later.
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 8168 | 1722K| 24 (0)| 00:00:01 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | 8168 | 1722K| 24 (0)| 00:00:01 | | | Q1,01 | P->S | QC (RAND) |
| 3 | BUFFER SORT | | 8168 | 1722K| | | | | Q1,01 | PCWP | |
| 4 | VIEW | | 8168 | 1722K| 24 (0)| 00:00:01 | | | Q1,01 | PCWP | |
| 5 | COLLECTION ITERATOR PICKLER FETCH| TF | | | | | | | Q1,01 | PCWP | |
| 6 | PX RECEIVE | | 100 | 4800 | 2 (0)| 00:00:01 | | | Q1,01 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | 100 | 4800 | 2 (0)| 00:00:01 | | | Q1,00 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | 100 | 4800 | 2 (0)| 00:00:01 | 1 | 10 | Q1,00 | PCWC | |
| 9 | TABLE ACCESS FULL | TEST_TAB | 100 | 4800 | 2 (0)| 00:00:01 | 1 | 20 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------------------------------------------
It may be that in this case you can use clustering with partition by ANY to achieve your goal...
create or replace package test_pkg as
type Test_Tab_Rec_t is record (
Tracking_ID number(19),
Partition_Key date,
Subpartition_Key number(3),
sid number
);
type Test_Tab_Rec_Tab_t is table of Test_Tab_Rec_t;
type Test_Tab_Rec_Hash_t is table of Test_Tab_Rec_t index by binary_integer;
type Test_Tab_Rec_HashHash_t is table of Test_Tab_Rec_Hash_t index by binary_integer;
type Cur_t is ref cursor return Test_Tab_Rec_t;
procedure populate;
procedure report;
function tf(cur in Cur_t)
return test_list pipelined
parallel_enable(partition cur by hash(subpartition_key));
function tf_any(cur in Cur_t)
return test_list PIPELINED
CLUSTER cur BY (Subpartition_Key)
parallel_enable(partition cur by ANY);
end;
create or replace package body test_pkg as
procedure populate
is
Tracking_ID number(19) := 1;
Partition_Key date := current_timestamp;
Subpartition_Key number(3) := 1;
begin
dbms_output.put_line(chr(10) || 'populate data into Test_Tab...');
for Subpartition_Key in 0..99
loop
for ctr in 1..1
loop
insert into test_tab (tracking_id, partition_key, subpartition_key)
values (Tracking_ID, Partition_Key, Subpartition_Key);
Tracking_ID := Tracking_ID + 1;
end loop;
end loop;
dbms_output.put_line('...done (populate data into Test_Tab)');
end;
procedure report
is
recs Test_Tab_Rec_Tab_t;
begin
dbms_output.put_line(chr(10) || 'list data per partition/subpartition...');
for item in (select partition_name, subpartition_name from user_tab_subpartitions where table_name='TEST_TAB' order by partition_name, subpartition_name)
loop
dbms_output.put_line('partition/subpartition = ' || item.partition_name || '/' || item.subpartition_name || ':');
execute immediate 'select * from test_tab SUBPARTITION(' || item.subpartition_name || ')' bulk collect into recs;
if recs.count > 0
then
for i in recs.first..recs.last
loop
dbms_output.put_line('...' || recs(i).Tracking_ID || ', ' || recs(i).Partition_Key || ', ' || recs(i).Subpartition_Key);
end loop;
end if;
end loop;
dbms_output.put_line('... done (list data per partition/subpartition)');
end;
function tf(cur in Cur_t)
return test_list pipelined
parallel_enable(partition cur by hash(subpartition_key))
is
sid number;
input Test_Tab_Rec_t;
output test_object;
begin
select userenv('SID') into sid from dual;
loop
fetch cur into input;
exit when cur%notfound;
output := test_object(input.tracking_id, input.partition_key, input.subpartition_key,sid);
pipe row(output);
end loop;
return;
end;
function tf_any(cur in Cur_t)
return test_list PIPELINED
CLUSTER cur BY (Subpartition_Key)
parallel_enable(partition cur by ANY)
is
sid number;
input Test_Tab_Rec_t;
output test_object;
begin
select userenv('SID') into sid from dual;
loop
fetch cur into input;
exit when cur%notfound;
output := test_object(input.tracking_id, input.partition_key, input.subpartition_key,sid);
pipe row(output);
end loop;
return;
end;
end;
XXXX> with parts as (
2 select --+ materialize
3 data_object_id,
4 subobject_name
5 FROM
6 user_objects
7 WHERE
8 object_name = 'TEST_TAB'
9 and
10 object_type = 'TABLE SUBPARTITION'
11 )
12 SELECT
13 COUNT(*),
14 parts.subobject_name,
15 target.sid
16 FROM
17 parts,
18 test_tab tt,
19 test_tab_part_hash target
20 WHERE
21 tt.tracking_id = target.tracking_id
22 and
23 parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
24 GROUP BY
25 parts.subobject_name,
26 target.sid
27 ORDER BY
28 target.sid,
29 parts.subobject_name
30 /
XXXX> INSERT INTO test_tab_part_hash select * from table(test_pkg.tf(CURSOR(select * from test_tab)))
2 /
100 rows created.
Elapsed: 00:00:00.14
XXXX>
XXXX> INSERT INTO test_tab_part_any_cluster select * from table(test_pkg.tf_any(CURSOR(select * from test_tab)))
2 /
100 rows created.
--using partition by hash
XXXX> with parts as (
2 select --+ materialize
3 data_object_id,
4 subobject_name
5 FROM
6 user_objects
7 WHERE
8 object_name = 'TEST_TAB'
9 and
10 object_type = 'TABLE SUBPARTITION'
11 )
12 SELECT
13 COUNT(*),
14 parts.subobject_name,
15 target.sid
16 FROM
17 parts,
18 test_tab tt,
19 test_tab_part_hash target
20 WHERE
21 tt.tracking_id = target.tracking_id
22 and
23 parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
24 GROUP BY
25 parts.subobject_name,
26 target.sid
27 /
COUNT(*) SUBOBJECT_NAME SID
3 SYS_SUBP31 1272
1 SYS_SUBP32 1272
1 SYS_SUBP33 1272
3 SYS_SUBP34 1272
1 SYS_SUBP36 1272
1 SYS_SUBP37 1272
3 SYS_SUBP38 1272
1 SYS_SUBP39 1272
1 SYS_SUBP32 1280
2 SYS_SUBP33 1280
2 SYS_SUBP34 1280
1 SYS_SUBP35 1280
2 SYS_SUBP36 1280
1 SYS_SUBP37 1280
2 SYS_SUBP38 1280
1 SYS_SUBP40 1280
2 SYS_SUBP33 1283
2 SYS_SUBP34 1283
2 SYS_SUBP35 1283
2 SYS_SUBP36 1283
1 SYS_SUBP37 1283
1 SYS_SUBP38 1283
2 SYS_SUBP39 1283
1 SYS_SUBP40 1283
1 SYS_SUBP32 1298
1 SYS_SUBP34 1298
1 SYS_SUBP36 1298
2 SYS_SUBP37 1298
4 SYS_SUBP38 1298
2 SYS_SUBP40 1298
1 SYS_SUBP31 1313
1 SYS_SUBP33 1313
1 SYS_SUBP39 1313
1 SYS_SUBP40 1313
1 SYS_SUBP32 1314
1 SYS_SUBP35 1314
1 SYS_SUBP38 1314
1 SYS_SUBP40 1314
2 SYS_SUBP33 1381
1 SYS_SUBP34 1381
1 SYS_SUBP35 1381
3 SYS_SUBP36 1381
3 SYS_SUBP37 1381
1 SYS_SUBP38 1381
2 SYS_SUBP36 1531
1 SYS_SUBP37 1531
2 SYS_SUBP38 1531
1 SYS_SUBP39 1531
1 SYS_SUBP40 1531
2 SYS_SUBP33 1566
1 SYS_SUBP34 1566
1 SYS_SUBP35 1566
1 SYS_SUBP37 1566
1 SYS_SUBP38 1566
2 SYS_SUBP39 1566
3 SYS_SUBP40 1566
1 SYS_SUBP32 1567
3 SYS_SUBP33 1567
3 SYS_SUBP35 1567
3 SYS_SUBP36 1567
1 SYS_SUBP37 1567
2 SYS_SUBP38 1567
62 rows selected.
--using partition by any cluster by subpartition_key
Elapsed: 00:00:00.26
XXXX> with parts as (
2 select --+ materialize
3 data_object_id,
4 subobject_name
5 FROM
6 user_objects
7 WHERE
8 object_name = 'TEST_TAB'
9 and
10 object_type = 'TABLE SUBPARTITION'
11 )
12 SELECT
13 COUNT(*),
14 parts.subobject_name,
15 target.sid
16 FROM
17 parts,
18 test_tab tt,
19 test_tab_part_any_cluster target
20 WHERE
21 tt.tracking_id = target.tracking_id
22 and
23 parts.data_object_id = DBMS_MView.PMarker(tt.rowid)
24 GROUP BY
25 parts.subobject_name,
26 target.sid
27 ORDER BY
28 target.sid,
29 parts.subobject_name
30 /
COUNT(*) SUBOBJECT_NAME SID
11 SYS_SUBP37 1253
10 SYS_SUBP34 1268
4 SYS_SUBP31 1289
10 SYS_SUBP40 1314
7 SYS_SUBP39 1367
9 SYS_SUBP35 1377
14 SYS_SUBP36 1531
5 SYS_SUBP32 1572
13 SYS_SUBP33 1577
17 SYS_SUBP38 1609
10 rows selected.
Bear in mind, though, that this does require a sort of the incoming data set, but it does not require buffering of the output...
PLAN_TABLE_OUTPUT
Plan hash value: 2570087774
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 8168 | 1722K| 24 (0)| 00:00:01 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 8168 | 1722K| 24 (0)| 00:00:01 | | | Q1,00 | P->S | QC (RAND) |
| 3 | VIEW | | 8168 | 1722K| 24 (0)| 00:00:01 | | | Q1,00 | PCWP | |
| 4 | COLLECTION ITERATOR PICKLER FETCH| TF_ANY | | | | | | | Q1,00 | PCWP | |
| 5 | SORT ORDER BY | | | | | | | | Q1,00 | PCWP | |
| 6 | PX BLOCK ITERATOR | | 100 | 4800 | 2 (0)| 00:00:01 | 1 | 10 | Q1,00 | PCWC | |
| 7 | TABLE ACCESS FULL | TEST_TAB | 100 | 4800 | 2 (0)| 00:00:01 | 1 | 20 | Q1,00 | PCWP | |
----------------------------------------------------------------------------------------------------------------------------------------------
HTH
David -
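The imbalance described in this thread can be reproduced outside the database. With hash distribution, the worker a bucket lands on is unrelated to the bucket layout, so several buckets can pile onto one worker while another sits idle; assigning workers directly from the bucket number (in spirit, what CLUSTER ... BY adds on top of PARTITION BY ANY) keeps each logical subpartition on exactly one worker. A rough Python illustration, with MD5 standing in for Oracle's internal hash:

```python
import hashlib
from collections import defaultdict

N_WORKERS = 4

def distribute(bucket_stream, worker_of):
    """Assign each record's bucket to a worker; report buckets seen per worker."""
    seen = defaultdict(set)
    for bucket in bucket_stream:
        seen[worker_of(bucket)].add(bucket)
    return dict(seen)

# 1000 records spread over 10 logical buckets, like SubPartNo(subpartition_key).
buckets = [n % 10 for n in range(1000)]

# Hash-style distribution (hashing A): the hash has no relation to the bucket
# layout, so the buckets-per-worker counts come out uneven and unpredictable.
def hashed_worker(bucket):
    digest = hashlib.md5(str(bucket).encode()).hexdigest()
    return int(digest, 16) % N_WORKERS

by_hash = distribute(buckets, hashed_worker)

# Direct assignment from the bucket number: deterministic, each bucket is
# owned by exactly one worker and the buckets spread as evenly as possible.
by_bucket = distribute(buckets, lambda bucket: bucket % N_WORKERS)

print(sorted(len(b) for b in by_bucket.values()))   # [2, 2, 3, 3]
```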
Information costing data for maintenance orders
Hi everybody,
Can somebody throw some light on the costing data for maintenance orders, for the following functions:
1) Costing sheet
2) Costing variants
3) Valuation variants
I would like to know how these settings are used in the business, from the PM point of view.
Thanks in Advance
DP
[SAP Help] http://help.sap.com/saphelp_erp60_sp/helpdata/EN/a9/ab7654414111d182b10000e829fbfe/frameset.htm
-
I am in the process of loading cost data into a v5.2.2 database. We are utilizing a SQL procedure to load the data into the Cost Item table. The extract of the Cost Item table is attached to this.
We are unable to see the Cost Type in the DWB Spec section. Please help!
I have a screenshot, but I do not see any way to load it into this thread...
I don't see your extract, but this may help.
Cost Items need to be tied to a cross reference, so a cross reference system selection is required before you will see the available cost types.
1. Add cross reference to a specification
2. Make sure there is an entry in the database that associates a cost, currency, facility, type, effective date to that cross reference
3. Make sure to select a cross reference system and currency, then the type drop down should populate.
You can find all the data that is needed in chapter 3 of the 5.2 DWB guide.
Data entered into the cost library must have the following information:
ERP/Cross Reference System—The code associated with the system that
sources the cost data.
Equivalent—The equivalent number for the specification that the cost is being
applied to.
Cost Type—A classification assigned to the cost.
Cost Set (Facility)—The facility that the cost is tied to. The same material can
have different costs across facilities.
Effective Date—The date that the cost information becomes effective in the
library.
UOM—The unit of measure in which the cost is specified.
Cost—The cost value in the currency specified.
Currency—The currency of the entered cost. -
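The required fields above amount to a record definition. As a sketch only (the field names below are paraphrased from the list, not the actual column names), a validation helper for a cost-library row might look like:

```python
# Required cost-library fields, paraphrased from the list above.
REQUIRED = ("xref_system", "equivalent", "cost_type", "facility",
            "effective_date", "uom", "cost", "currency")

def validate_cost_item(item: dict):
    """Return the list of required cost-library fields missing from item."""
    return [f for f in REQUIRED if item.get(f) in (None, "")]

row = {"xref_system": "SAP", "equivalent": "100045", "cost_type": "STANDARD",
       "facility": "PLANT-A", "effective_date": "2024-01-01",
       "uom": "KG", "cost": 12.5, "currency": "USD"}
print(validate_cost_item(row))              # []
print(validate_cost_item({"cost": 1.0}))    # everything except cost is missing
```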
Open data set and close data set
hi all,
I have some doubts about OPEN / READ / CLOSE DATASET:
how do I transfer data from an internal table to a sequential file, and how do I find the sequential file?
thanks and regards
chaitanya
Hi Chaitanya,
Refer Sample Code:
constants: c_split TYPE c
VALUE cl_abap_char_utilities=>horizontal_tab,
c_path TYPE char100
VALUE '/local/data/interface/A28/DM/OUT'.
Selection Screen
SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
PARAMETERS : rb_pc RADIOBUTTON GROUP r1 DEFAULT 'X'
USER-COMMAND ucomm, "For Presentation
p_f1 LIKE rlgrap-filename
MODIF ID rb1, "Input File
rb_srv RADIOBUTTON GROUP r1, "For Application
p_f2 LIKE rlgrap-filename
MODIF ID rb2, "Input File
p_direct TYPE char128 MODIF ID abc DEFAULT c_path.
"File directory
SELECTION-SCREEN END OF BLOCK b1.
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_f1.
*-- Browse Presentation Server
PERFORM f1000_browse_presentation_file.
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_f2.
*-- Browse Application Server
PERFORM f1001_browse_appl_file.
AT SELECTION-SCREEN OUTPUT.
LOOP AT SCREEN.
IF rb_pc = 'X' AND screen-group1 = 'RB2'.
screen-input = '0'.
MODIFY SCREEN.
ELSEIF rb_srv = 'X' AND screen-group1 = 'RB1'.
screen-input = '0'.
MODIFY SCREEN.
ENDIF.
IF screen-group1 = 'ABC'.
screen-input = '0'.
MODIFY SCREEN.
ENDIF.
ENDLOOP.
*& Form f1000_browse_presentation_file
Pick up the filepath for the file in the presentation server
FORM f1000_browse_presentation_file .
CONSTANTS: lcl_path TYPE char20 VALUE 'C:'.
CALL FUNCTION 'WS_FILENAME_GET'
EXPORTING
def_path = lcl_path
mask = c_mask "',.,..'
mode = c_mode
title = text-006
IMPORTING
filename = p_f1
EXCEPTIONS
inv_winsys = 1
no_batch = 2
selection_cancel = 3
selection_error = 4
OTHERS = 5.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
flg_pre = c_x.
ENDIF.
ENDFORM. " f1000_browse_presentation_file
*& Form f1001_browse_appl_file
Pick up the file path for the file in the application server
FORM f1001_browse_appl_file .
DATA: lcl_directory TYPE char128.
lcl_directory = p_direct.
CALL FUNCTION '/SAPDMC/LSM_F4_SERVER_FILE'
EXPORTING
directory = lcl_directory
filemask = c_mask
IMPORTING
serverfile = p_f2
EXCEPTIONS
canceled_by_user = 1
OTHERS = 2.
IF sy-subrc <> 0.
MESSAGE e000(zmm) WITH text-039.
flg_app = 'X'.
ENDIF.
ENDFORM. " f1001_browse_appl_file
*& Form f1003_pre_file
Upload the file from the presentation server
FORM f1003_pre_file .
DATA: lcl_filename TYPE string.
lcl_filename = p_f1.
IF p_f1 IS NOT INITIAL.
CALL FUNCTION 'GUI_UPLOAD'
EXPORTING
filename = lcl_filename
filetype = 'ASC'
has_field_separator = 'X'
TABLES
data_tab = i_input
EXCEPTIONS
file_open_error = 1
file_read_error = 2
no_batch = 3
gui_refuse_filetransfer = 4
invalid_type = 5
no_authority = 6
unknown_error = 7
bad_data_format = 8
header_not_allowed = 9
separator_not_allowed = 10
header_too_long = 11
unknown_dp_error = 12
access_denied = 13
dp_out_of_memory = 14
disk_full = 15
dp_timeout = 16
OTHERS = 17.
IF sy-subrc <> 0.
MESSAGE s000 WITH text-031.
EXIT.
ENDIF.
ELSE.
PERFORM populate_error_log USING space
text-023.
ENDIF.
ENDFORM. " f1003_pre_file
*& Form f1004_app_file
upload the file from the application server
FORM f1004_app_file .
REFRESH: i_input.
OPEN DATASET p_f2 IN TEXT MODE ENCODING DEFAULT FOR INPUT.
IF sy-subrc EQ 0.
DO.
READ DATASET p_f2 INTO wa_input_rec.
IF sy-subrc EQ 0.
*-- Split The CSV record into Work Area
PERFORM f0025_record_split.
*-- Populate internal table.
APPEND wa_input TO i_input.
CLEAR wa_input.
IF sy-subrc <> 0.
MESSAGE s000 WITH text-030.
EXIT.
ENDIF.
ELSE.
EXIT.
ENDIF.
ENDDO.
ENDIF.
ENDFORM. " f1004_app_file
Move the assembly layer file into the work area
FORM f0025_record_split .
CLEAR wa_input.
SPLIT wa_input_rec AT c_split INTO
wa_input-legacykey
wa_input-bu_partner
wa_input-anlage.
ENDFORM. " f0025_record_split
Reward points if this helps.
Manish -
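Stripped of the ABAP specifics, f1004_app_file above is a plain read-split-append loop over a delimited server file. A rough Python equivalent, assuming comma-separated records with the three fields from f0025_record_split (the actual delimiter in the sample is the horizontal-tab constant):

```python
import io

def read_input_file(f):
    """Read delimiter-separated records into a list of dicts, mirroring
    OPEN DATASET / READ DATASET / f0025_record_split / APPEND."""
    rows = []
    for line in f:                        # READ DATASET, one record at a time
        line = line.rstrip("\n")
        if not line:
            continue
        # SPLIT wa_input_rec AT c_split INTO the three work-area fields.
        legacykey, bu_partner, anlage = line.split(",")
        rows.append({"legacykey": legacykey,
                     "bu_partner": bu_partner,
                     "anlage": anlage})   # APPEND wa_input TO i_input
    return rows

sample = io.StringIO("KEY1,BP100,ANL01\nKEY2,BP200,ANL02\n")
recs = read_input_file(sample)
print(len(recs))           # 2
print(recs[0]["anlage"])   # ANL01
```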
Hi,
"Report Builder is a report authoring environment for business users who prefer to work in the Microsoft Office environment.
You work with one report at a time. You can modify a published report directly from a report server. You can quickly build a report by adding items from the Report Part Gallery provided by report designers from your organization." (as mentioned on TechNet)
I wonder how a non-technical business analyst can use Report Builder 3 to create ad-hoc reports/analysis with parameter lists based on other data sets.
Do they need to learn T-SQL to add and link a parameter in Report Builder? How can they add a parameter to a report? I'm not sure what I'm missing about the idea behind Report Builder.
I have SQL Server 2012 Standard and Report Builder 3.0 and want to train non-technical users to create reports as they need, without asking the IT department.
Everything seems simple and works, except for parameters with lists of values, e.g. sales year, sales month, gender, etc.
So how can they configure parameters based on other data sets?
The workaround in my mind is to create a report with most of the columns, add the most frequent parameters based on other data sets, and then let the non-technical user modify that report according to their needs, but that way we are still restricting users to a set of predefined reports.
I would like functionality like Excel Power View's parameters in Report Builder, driven from the source data, but that is only available from Excel 2013 onward, which most people don't have yet.
So how should Report Builder be used? If you have any other thoughts or workarounds, or can clarify the purpose of Report Builder, please let me know.
Many thanks and Kind Regards,
For quick review of new features, try virtual labs: http://msdn.microsoft.com/en-us/aa570323
Hi Asam,
If we want to create a parameter that depends on another dataset, we can additionally create or add a dataset, embedded or shared, whose query contains query variables, and then use the “Get values from a query” option to supply the available values. For more details, please see: http://msdn.microsoft.com/en-us/library/dd283107.aspx
http://msdn.microsoft.com/en-us/library/dd220464.aspx
As to the Report Builder features, we can refer to the following articles: http://technet.microsoft.com/en-us/library/hh213578.aspx
http://technet.microsoft.com/en-us/library/hh965699.aspx
Hope this helps.
Thanks,
Katherine Xiong
Katherine Xiong
TechNet Community Support -
Hi,
In release note of 9.1 it is mentioned that :
Display of all OIM User attributes on the Step 3: Modify Connector Configuration page
On the Step 3: Modify Connector Configuration page, the OIM - User data set now shows all the OIM User attributes. In the earlier release, the display of fields was restricted to the ones that were most commonly used.
and
Attributes of the ID field are editable
On the Step 3: Modify Connector Configuration page, you can modify some of the attributes of the ID field. The ID field stores the value that uniquely identifies a user in Oracle Identity Manager and in the target system.
Can anyone please guide me on how to get both things? I am getting only a few fields of the user profile in the OIM - User data set, and I am also not able to modify the ID field.
I am using OIM 9.1 on Websphere application server 6.1
Thanks
Unfortunately I do not have experience using the SPML generic connector. Have you read through all the documentation pertaining to the GTC?
-Kevin -
Costing data in Accounting Tab
Hi,
When I create a task with a role assignment, it calculates the costing based on the rates defined, but as soon as I assign a resource to the role, the cost data goes away.
Can anyone help me understand this costing process? It works fine at the task level and the role level as long as resources are not assigned to them.
We need to assign resources to use CATS for time booking. It seems I'm missing something here.
Pl. help.
Thx/DS
Hello DS,
As I have mentioned earlier there are two ways of doing Accounting in cProjects, Task based and role based.
For task-based accounting you need to define a cost/revenue rate per task type in customizing. When that task type is used while defining a task, you will get this default cost/revenue rate on the Additional Data tab of that task. Also, at the project header level, on the Additional Data tab, you need to define the Org. Unit.
There are further two ways to do Role based accounting.
a) Without staffing: in customizing, you define a cost/revenue rate for the particular role type under Define Project Role Type. When you create a project role using that role type, you get this default cost/revenue rate on the Costing tab page.
b) With staffing: here, in Define Project Role Type you do not define a cost/revenue rate; instead you define it for a resource in transaction BP. Whenever a project role is staffed with that resource, accounting is done.
For both scenarios you need to define Org. Unit as mentioned earlier.
Hope this clarifies your doubt.
Regards,
Niraj -
Multiple data sets: a common global dataset and per/report data sets
Is there a way to have a common dataset included in an actual report data set?
Case:
For one project I have about 70 different letters, each letter being a report in Bi Publisher, each one of them having its own dataset(s).
However all of these letters share a common standardized reference block (e.g. the user, his email address, his phone number, etc), this common reference block comes from a common dataset.
The layout of the reference block is done by including a sub-layout (RTF file).
The SQL query that produces the dataset for the reference block is always the same and, for now, is included in each of the 70 reports.
This makes maintenance of the reference block very hard, because each of the 70 reports must be adapted when changes to the reference block/dataset are made.
Is there a better way to handle this? Can I include a shared dataset, which I would define and maintain only once, in each single report definition?
Hi,
The use of the subtemplate for the centrally managed layout, is ok.
However, I would like to be able to do the same thing for the datasets in the reports:
one centrally managed dataset (definition) for the common dataset, which is dynamic and, in our case, a rather complex query,
and
datasets defined on a per report basis
It would be nice if we could do a kind of 'include dataset from another report' when defining the datasets for a report.
Of course, this included dataset is executed within each individual report.
This possibility would make the maintenance of this one central query easier than when we have to maintain this query in each of the 70 reports over and over again. -
SQL Update a Single Row Multiple Times Using 2 Data Sets
I'm working in T-SQL and have an issue where I need to do multiple updates to a single row based on multiple conditions, by Rank_:
If the column is NULL I need it to update no matter what the Rank is.
If the Ranks are the same I need it to update in order of T2_ID.
And I need it to use the last updated output.
I've tried using the update statement below, but it only does the first update and the rest are ignored. Here is an example of the data sets I'm working with and the desired results. Thanks in advance!
update a
set Middle = case when a.Rank_> b.Rank_ OR a.Middle IS NULL then ISNULL(b.Middle,a.Middle) end,
LName = case when a.Rank_> b.Rank_ OR a.Lname IS NULL then ISNULL(b.LName,a.LName) end,
Rank_ = case when a.Rank_> b.Rank_ then b.Rank_ end
from #temp1 a
inner join #temp2 b on a.fname = b.fname
where b.T2_ID in (select top 100 percent T2_ID from #temp2 order by T2_ID asc)
The MERGE clause actually errors because it attempts to update the same record. I think this CTE statement is the closest I've come, but I'm still working through it as I'm not too familiar with them. It returns multiple rows, which I will have to insert into a temp table to update, since the resulting row I need is the last in the table.
;WITH cteRowNumber
AS(
Select DISTINCT
Row_Number() OVER(PARTITION BY a.LName ORDER BY a.LName ASC, a.Rank_ DESC,b.T2ID ASC) AS RowNumber
,a.FName
,a.LName
,b.LName as xLname
,a.MName
,b.MName AS xMName
,a.Rank_
,b.Rank_ AS xRank
,b.T2ID
FROM #temp1 a
inner join #temp2 b
ON a.fname = b.fname
), cteCursor
AS(
Select a.RowNumber,
a.Fname
,a.LName
,a.xLname
,a.MName
,a.xMName
,a.xRank
,a.T2ID
,CASE WHEN a.Rank_ >= a.xRank THEN ISNULL(a.xRank,a.Rank_) else ISNULL(a.Rank_,a.xRank) end AS Alt_Rank_
,CASE WHEN a.Rank_ >= a.xRank THEN ISNULL(a.xMName,a.MName) else ISNULL(a.MName,a.xMName) end AS Alt_MName
,CASE WHEN a.Rank_ >= a.xRank THEN ISNULL(a.xLName,a.lname) else ISNULL(a.LName,a.xlname) end as Alt_Lname
FROM cteRowNumber a
where a.RowNumber = 1
UNION ALL
Select crt.RowNumber
,crt.FName
,crt.LName
,crt.xLname
,crt.MName
,crt.xMName
,crt.xRank
,crt.T2ID
,CASE WHEN Prev.Alt_Rank_ >= crt.xRank THEN ISNULL(crt.xRank,Prev.Alt_Rank_) else ISNULL(Prev.Alt_Rank_,crt.xRank) end AS Alt_Rank
,CASE WHEN Prev.Alt_Rank_ >= crt.xRank THEN ISNULL(crt.xMName,Prev.Alt_MName) else ISNULL(Prev.Alt_MName,crt.xMName) end AS Alt_MName
,CASE WHEN Prev.Alt_Rank_ >= crt.xRank THEN ISNULL(crt.xLName,Prev.Alt_Lname) else ISNULL(Prev.Alt_Lname,crt.xLName) end as Alt_Lname
FROM cteCursor prev
inner join cteRowNumber crt
on prev.fname = crt.fname and prev.RowNumber + 1 = crt.RowNumber
SELECT cte.*
FROM cteCursor cte -
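The behaviour being chased here (apply candidate rows in T2_ID order, fill NULLs, prefer lower Rank_, and let each step see the previous step's output) is easy to state procedurally, which is exactly what a single set-based UPDATE cannot do. A Python sketch of that fold, using simplified column names:

```python
def apply_updates(target, candidates):
    """Fold candidate rows into a single target row in T2_ID order, so each
    step sees the previous step's output (a single UPDATE..JOIN cannot)."""
    row = dict(target)
    for cand in sorted(candidates, key=lambda c: c["t2_id"]):
        # A candidate "wins" when the target rank is NULL or worse (higher).
        better = row["rank"] is None or (
            cand["rank"] is not None and row["rank"] > cand["rank"])
        for col in ("middle", "lname"):
            if better or row[col] is None:
                # ISNULL(b.col, a.col): never overwrite a value with NULL.
                row[col] = cand[col] if cand[col] is not None else row[col]
        if better:
            row["rank"] = cand["rank"]
    return row

target = {"fname": "JO", "middle": None, "lname": None, "rank": 5}
cands = [{"t2_id": 1, "middle": "A", "lname": None, "rank": 5},
         {"t2_id": 2, "middle": None, "lname": "SMITH", "rank": 3}]
result = apply_updates(target, cands)
print(result)   # middle filled by t2_id 1, lname and rank taken from t2_id 2
```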
OBIEE 11g BI Publisher; New Data Set Creation Error "Failed to load SQL"
Hi,
I'm trying to create a new SQL data set (from a client machine). I use "query builder" to build the data set. But when I click "OK", it fires the error "Failed to load SQL".
But strangely, if I connect to the OBIEE server's desktop and create the data set there, it works without any issues. I wonder whether this could be a firewall issue; if so, what are the ports I should open?
It's an enterprise installation, and we have already opened 9703, 9704, and 9706.
Has anyone come across such a situation?
Talles,
First of all you might have more chance of getting a response over in the BIP forum. Other than that all I can think of is: is your MS SQL Server running with mixed mode auth? -
Exception Handling for OPEN DATA SET and CLOSE DATA SET
Hi ppl,
Can you please let me know what exceptions can be handled for OPEN, READ, TRANSFER, and CLOSE DATASET?
Many thanks.
Hi,
try this way....
DO.
TRY.
READ DATASET filename INTO datatab.
CATCH cx_sy_conversion_codepage cx_sy_codepage_converter_init
cx_sy_file_authority cx_sy_file_io cx_sy_file_open.
ENDTRY.
IF sy-subrc NE 0.
EXIT.
ELSE.
APPEND datatab.
ENDIF.
ENDDO. -
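The TRY ... CATCH pattern above maps onto ordinary structured exception handling around file I/O. Roughly, in Python terms (with OSError and UnicodeDecodeError playing the role of the cx_sy_file_* and codepage exceptions):

```python
def read_dataset(path):
    """Read a server file line by line, converting I/O and decoding problems
    (the cx_sy_file_* / codepage analogues) into a logged, empty result."""
    datatab = []
    try:
        with open(path, encoding="utf-8") as f:   # OPEN DATASET ... FOR INPUT
            for line in f:                        # READ DATASET in a loop
                datatab.append(line.rstrip("\n")) # APPEND datatab
    except (OSError, UnicodeDecodeError) as exc:  # CATCH cx_sy_file_io etc.
        print(f"dataset error: {exc}")
    return datatab

print(read_dataset("/no/such/file"))  # logs the error, then prints []
```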
Download using open data set and close data set
Can anybody please send a sample program using OPEN DATASET and CLOSE DATASET? The data should get downloaded on the application server.
A very simple program is needed.
Hi Arun,
See the Sample code for BDC using OPEN DATASET.
report ZSDBDCP_PRICING no standard page heading
line-size 255.
include zbdcrecx1.
*--Internal Table To hold condition records data from flat file.
Data: begin of it_pricing occurs 0,
key(4),
f1(4),
f2(4),
f3(2),
f4(18),
f5(16),
end of it_pricing.
*--Internal Table To hold condition records header .
data : begin of it_header occurs 0,
key(4),
f1(4),
f2(4),
f3(2),
end of it_header.
*--Internal Table To hold condition records details .
data : begin of it_details occurs 0,
key(4),
f4(18),
f5(16),
end of it_details.
data : v_sno(2),
v_rows type i,
v_fname(40).
start-of-selection.
refresh : it_pricing,it_header,it_details.
clear : it_pricing,it_header,it_details.
CALL FUNCTION 'UPLOAD'
EXPORTING
FILENAME = 'C:\WINDOWS\Desktop\pricing.txt'
FILETYPE = 'DAT'
TABLES
DATA_TAB = it_pricing
EXCEPTIONS
CONVERSION_ERROR = 1
INVALID_TABLE_WIDTH = 2
INVALID_TYPE = 3
NO_BATCH = 4
UNKNOWN_ERROR = 5
GUI_REFUSE_FILETRANSFER = 6
OTHERS = 7.
WRITE : / 'Condition Records ', P_FNAME, ' on ', SY-DATUM.
OPEN DATASET P_FNAME FOR INPUT IN TEXT MODE.
if sy-subrc ne 0.
write : / 'File could not be uploaded.. Check file name.'.
stop.
endif.
CLEAR : it_pricing[], it_pricing.
DO.
READ DATASET P_FNAME INTO V_STR.
IF SY-SUBRC NE 0.
EXIT.
ENDIF.
write v_str.
translate v_str using '#/'.
SPLIT V_STR AT ',' INTO it_pricing-key
it_pricing-F1 it_pricing-F2 it_pricing-F3
it_pricing-F4 it_pricing-F5 .
APPEND it_pricing.
CLEAR it_pricing.
ENDDO.
IF it_pricing[] IS INITIAL.
WRITE : / 'No data found to upload'.
STOP.
ENDIF.
loop at it_pricing.
At new key.
read table it_pricing index sy-tabix.
move-corresponding it_pricing to it_header.
append it_header.
clear it_header.
endat.
move-corresponding it_pricing to it_details.
append it_details.
clear it_details.
endloop.
perform open_group.
v_rows = sy-srows - 8.
loop at it_header.
perform bdc_dynpro using 'SAPMV13A' '0100'.
perform bdc_field using 'BDC_CURSOR'
'RV13A-KSCHL'.
perform bdc_field using 'BDC_OKCODE'
'/00'.
perform bdc_field using 'RV13A-KSCHL'
it_header-f1.
perform bdc_dynpro using 'SAPMV13A' '1004'.
perform bdc_field using 'BDC_CURSOR'
'KONP-KBETR(01)'.
perform bdc_field using 'BDC_OKCODE'
'/00'.
perform bdc_field using 'KOMG-VKORG'
it_header-f2.
perform bdc_field using 'KOMG-VTWEG'
it_header-f3.
* Table control: fill the visible rows, then page down (=P+)
* once the screen is full
v_sno = 0.
LOOP AT it_details WHERE key EQ it_header-key.
  v_sno = v_sno + 1.
  CLEAR v_fname.
  CONCATENATE 'KOMG-MATNR(' v_sno ')' INTO v_fname.
  PERFORM bdc_field USING v_fname it_details-f4.
  CLEAR v_fname.
  CONCATENATE 'KONP-KBETR(' v_sno ')' INTO v_fname.
  PERFORM bdc_field USING v_fname it_details-f5.
  IF v_sno EQ v_rows.
    v_sno = 0.
    PERFORM bdc_dynpro USING 'SAPMV13A' '1004'.
    PERFORM bdc_field  USING 'BDC_OKCODE' '=P+'.
    PERFORM bdc_dynpro USING 'SAPMV13A' '1004'.
    PERFORM bdc_field  USING 'BDC_OKCODE' '/00'.
  ENDIF.
ENDLOOP.
*--Save
perform bdc_dynpro using 'SAPMV13A' '1004'.
perform bdc_field using 'BDC_OKCODE'
'=SICH'.
perform bdc_transaction using 'VK11'.
endloop.
perform close_group.
Hope this resolves your query.
Reward all the helpful answers.
Regards -
What is open data set and close data set
What is OPEN DATASET and CLOSE DATASET, and how do we use files in SAP directories?
Hi,
OPEN DATASET is used to read or write files on the application server; as far as I know there is no other way to do this. Here is a short description:
FILE HANDLING IN SAP
Introduction
Files on application server are sequential files.
Files on presentation server / workstation are local files.
A sequential file is also called a dataset.
Handling of Sequential file
Three steps are involved in sequential file handling
OPEN
PROCESS
CLOSE
Here processing of file can be READING a file or WRITING on to a file.
OPEN FILE
Before data can be processed, a file needs to be opened.
After processing file is closed.
Syntax:
OPEN DATASET <file name> FOR {OUTPUT/INPUT/APPENDING}
IN {TEXT/BINARY} MODE
This statement sets SY-SUBRC to 0 if the file is opened successfully, or 8 if not.
OUTPUT: Opens the file for writing and places the cursor at the start of the dataset. If the dataset already exists, its old contents are overwritten.
INPUT: Opens a file for READ and places the cursor at the beginning of the file.
FOR APPENDING: Opens the file for writing and places the cursor at the end of file. If the file does not exist, it is generated.
BINARY MODE: READ or TRANSFER works character-wise: each time, n characters are read or transferred, and the next READ or TRANSFER starts at the next character position, not on the next line.
IN TEXT MODE: READ or TRANSFER starts at the beginning of a new line each time. If, for READ, the destination field is shorter than the source line, the line is truncated; if it is longer, the field is padded with spaces.
Defaults: If nothing is mentioned, then defaults are FOR INPUT and in BINARY MODE.
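A minimal sketch of opening a file for input in text mode, with the return-code check described above. The file path here is purely illustrative; note also that newer ABAP releases require an ENCODING addition (e.g. ENCODING DEFAULT) with TEXT MODE, which the older syntax shown in this description omits.

```abap
* Hypothetical example - the path below is illustrative only.
DATA lv_file TYPE string VALUE '/tmp/pricing.txt'.

OPEN DATASET lv_file FOR INPUT IN TEXT MODE.
IF sy-subrc <> 0.
  WRITE: / 'File could not be opened.'.
  STOP.
ENDIF.
```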
PROCESS FILE:
Processing a file involves reading the file with READ or writing onto the file with TRANSFER.
TRANSFER Statement
Syntax:
TRANSFER <field> TO <file name>.
<Field> can also be a field string / work area / DDIC structure.
Each TRANSFER statement writes one record to the dataset: in binary mode it writes as many characters as the length of the field; in text mode it writes one line.
If the file is not already open, TRANSFER tries to OPEN file FOR OUTPUT (IN BINARY MODE) or using the last OPEN DATASET statement for this file.
In file handling, TRANSFER is the only statement which does not return SY-SUBRC.
READ Statement
Syntax:
READ DATASET <file name> INTO <field>.
<Field> can also be a field string / work area / DDIC structure.
Each READ gets one record from the dataset: in binary mode it reads as many characters as the length of the field; in text mode it reads one line.
CLOSE FILE:
The program will close all sequential files, which are open at the end of the program. However, it is a good programming practice to explicitly close all the datasets that were opened.
Syntax:
CLOSE DATASET <file name>.
SY-SUBRC will be set to 0 or 8 depending on whether the CLOSE is successful or not.
DELETE FILE:
A dataset can be deleted.
Syntax:
DELETE DATASET <file name>.
SY-SUBRC will be set to 0 or 8 depending on whether the DELETE is successful or not.
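For example (the file name is a hypothetical placeholder):

```abap
* Delete a server file and check the return code.
DATA lv_file TYPE string VALUE '/tmp/pricing.txt'.

DELETE DATASET lv_file.
IF sy-subrc <> 0.
  WRITE: / 'File could not be deleted.'.
ENDIF.
```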
Pseudo logic for processing the sequential files:
For reading:
Open dataset for input in a particular mode.
Start DO loop.
Read dataset into a field.
If READ is not successful.
Exit the loop.
Endif.
Do relevant processing for that record.
End the do loop.
Close the dataset.
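The reading steps above can be sketched as follows (the file name and the 255-character line field are illustrative assumptions):

```abap
* Sketch: read a server file line by line until end of file.
DATA: lv_file      TYPE string VALUE '/tmp/input.txt',
      lv_line(255) TYPE c.

OPEN DATASET lv_file FOR INPUT IN TEXT MODE.
IF sy-subrc = 0.
  DO.
    READ DATASET lv_file INTO lv_line.
    IF sy-subrc <> 0.
      EXIT.            " end of file
    ENDIF.
    WRITE: / lv_line.  " relevant processing for that record
  ENDDO.
  CLOSE DATASET lv_file.
ENDIF.
```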
For writing:
Open dataset for output / Appending in a particular mode.
Populate the field that is to be transferred.
TRANSFER the field to the dataset.
Close the dataset.
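The writing steps above can be sketched as follows (file name and record content are hypothetical):

```abap
* Sketch: write one record to a server file.
DATA: lv_file      TYPE string VALUE '/tmp/output.txt',
      lv_line(255) TYPE c.

OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE.
IF sy-subrc = 0.
  lv_line = 'Sample record'.
  TRANSFER lv_line TO lv_file.  " writes one line in text mode
  CLOSE DATASET lv_file.
ENDIF.
```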
Regards
Anver
If this helped, please mark points.