Mapping Columns
I have a requirement:
Member's relation code: Subscriber = '18', Spouse = '01', Child = '19', Other Adult = '34'.
The column that supplies the information is
RELATIONTOINSURED
and the values in that column are:
M - Man
W - Woman
S - Son
D - Daughter
How do I map both columns to meet the above requirement?
Thanks.
Why are there two hyphens next to
WHEN 'S' or 'D' Then '19'
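For what it's worth, the whole mapping can be written as one searched CASE expression. This is only a sketch: the table name Members is assumed, as is the guess that M maps to the subscriber code and W to the spouse code (the original post does not say so explicitly). Note also that WHEN 'S' or 'D' is not valid CASE syntax; an IN list does the same job:

```sql
SELECT RELATIONTOINSURED,
       CASE
            WHEN RELATIONTOINSURED = 'M' THEN '18'          -- subscriber (assumed)
            WHEN RELATIONTOINSURED = 'W' THEN '01'          -- spouse (assumed)
            WHEN RELATIONTOINSURED IN ('S', 'D') THEN '19'  -- child
            ELSE '34'                                       -- other adult
       END AS relation_code
FROM Members;
```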
If I helped you, please mark me as helpful. If I answered your question, mark it as an answer.
Similar Messages
-
Column name mapping from format file is not working
Hi All,
I tried to import data from a text file using the BULK INSERT format file option. I am able to load the data from the file into the table, but when I change the column order in the input file, the data is not inserted according to the changed column order.
BULK INSERT monsanto55_Steelwedge_monsanto_manualBuild_filters.dbo.bulk_test
FROM '\\192.168.97.23\poc\bulk.txt'
WITH( FORMATFILE = '\\192.168.97.23\poc\build.fmt',FIRSTROW = 2)
TABLE COLUMNS: P_Id, LastName, FirstName, age, Address, City, no
Format file is :
10.0
7
1 SQLCHAR 0 12 "\t" 1 P_Id ""
2 SQLCHAR 0 255 "\t" 2 LastName SQL_Latin1_General_CP1_CI_AS
3 SQLCHAR 0 255 "\t" 3 FirstName SQL_Latin1_General_CP1_CI_AS
4 SQLCHAR 0 12 "\t" 4 age ""
5 SQLCHAR 0 255 "\t" 5 Address SQL_Latin1_General_CP1_CI_AS
6 SQLCHAR 0 255 "\t" 6 City SQL_Latin1_General_CP1_CI_AS
7 SQLCHAR 0 12 "\r\n" 7 no ""
input data file:
P_Id  FirstName  LastName  age  Address      City  no
1     first      one       11   sanathnagar  HYD   5
2     second     two       12   xyz          abc   0
3     third      three     20   ameerpet     SEC   30
According to the mapping, the data in the table must be:
P_Id  LastName  FirstName  age  Address      City  no
1     one       first      11   sanathnagar  HYD   5
2     two       second     12   xyz          abc   0
3     three     third      20   ameerpet     SEC   30
But it is inserting in the same order as the input file, so FirstName and LastName are mismatched.
Please let me know if you have any idea.
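A possible fix, sketched from memory and not tested here: in a non-XML format file, the first number on each row is the field's position in the data file, while the sixth value is the target column's position in the table. So you can keep the file's own field order and repoint the server column order instead. Assuming the data file's order is P_Id, FirstName, LastName, age, Address, City, no, the format file would become:

```
10.0
7
1 SQLCHAR 0 12  "\t"   1 P_Id      ""
2 SQLCHAR 0 255 "\t"   3 FirstName SQL_Latin1_General_CP1_CI_AS
3 SQLCHAR 0 255 "\t"   2 LastName  SQL_Latin1_General_CP1_CI_AS
4 SQLCHAR 0 12  "\t"   4 age       ""
5 SQLCHAR 0 255 "\t"   5 Address   SQL_Latin1_General_CP1_CI_AS
6 SQLCHAR 0 255 "\t"   6 City      SQL_Latin1_General_CP1_CI_AS
7 SQLCHAR 0 12  "\r\n" 7 no        ""
```

Here file field 2 (FirstName in the file) is mapped to table column 3, and file field 3 to table column 2, which is what the expected output requires.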
Thanks,
Hi all,
Thanks for your response, and sorry for the wrong question. Here I need to supply default values to columns of the table based on the input file, so the default values of the columns will change based on the input file. One more thing: I don't have any rights to change the table structure; my work is only to load data from the file into the table. Is there any chance to supply default values by an XML format
file instead of a .fmt file,
or any other scenario? Please let me know the possibility. -
Mapping rows to columns across two tables
Hi All,
I am not sure how this could be done; could you please help me out with this one.
I have data in two tables.
Table 1
Prod_number Refer_number
C1 A1
C1 A2
C2 B1
C2 B2
C2 B3
C3 D1
C3 D2
C3 D3
C3 D4
C4 E1
Table 2
Prod_number Document
C1 R1
C1 R2
C1 R3
C1 R4
C2 S1
C3 T1
C3 T2
C4 NONE
The final output should combine the two tables, mapping the rows in Table 2 to columns in the final output based on the Prod_number field:
Prod_number Refer_number Document_1 Document_2 Document_3 Document_4
C1 A1 R1 R2 R3 R4
C1 A2 R1 R2 R3 R4
C2 B1 S1 - - -
C2 B2 S1 - - -
C2 B3 S1 - - -
C3 D1 T1 T2 - -
C3 D2 T1 T2 - -
C3 D3 T1 T2 - -
C3 D4 T1 T2 - -
C4 E1 - - - -
Regards,
Maverick
Maverick,
This is a standard pivot. In 11g you can use the PIVOT function. In versions prior to 11g you can use this:
SQL> create table table_1 (prod_number,refer_number)
2 as
3 select 'C1', 'A1' from dual union all
4 select 'C1', 'A2' from dual union all
5 select 'C2', 'B1' from dual union all
6 select 'C2', 'B2' from dual union all
7 select 'C2', 'B3' from dual union all
8 select 'C3', 'D1' from dual union all
9 select 'C3', 'D2' from dual union all
10 select 'C3', 'D3' from dual union all
11 select 'C3', 'D4' from dual union all
12 select 'C4', 'E1' from dual
13 /
Table created.
SQL> create table table_2 (prod_number,document)
2 as
3 select 'C1', 'R1' from dual union all
4 select 'C1', 'R2' from dual union all
5 select 'C1', 'R3' from dual union all
6 select 'C1', 'R4' from dual union all
7 select 'C2', 'S1' from dual union all
8 select 'C3', 'T1' from dual union all
9 select 'C3', 'T2' from dual union all
10 select 'C4', 'NONE' from dual
11 /
Table created.
SQL> set null "-"
SQL> select t1.prod_number
2 , t1.refer_number
3 , max(decode(t2.rn,1,document)) document_1
4 , max(decode(t2.rn,2,document)) document_2
5 , max(decode(t2.rn,3,document)) document_3
6 , max(decode(t2.rn,4,document)) document_4
7 from table_1 t1
8 , ( select t.*
9 , row_number() over (partition by prod_number order by document) rn
10 from table_2 t
11 where document != 'NONE'
12 ) t2
13 where t1.prod_number = t2.prod_number (+)
14 group by t1.prod_number
15 , t1.refer_number
16 /
PR RE DOCU DOCU DOCU DOCU
C1 A1 R1 R2 R3 R4
C1 A2 R1 R2 R3 R4
C2 B1 S1 - - -
C2 B2 S1 - - -
C2 B3 S1 - - -
C3 D1 T1 T2 - -
C3 D2 T1 T2 - -
C3 D3 T1 T2 - -
C3 D4 T1 T2 - -
C4 E1 - - - -
10 rows selected.
Regards,
Rob. -
I am using the Tech Comm Ste 2, Adobe Robohelp 8.0.2.2 in Windows XP. I assign map IDs manually instead of using automap. I open the Edit Map IDs window. In the right column, under "Topic," I select the topic I want to map. I then click the icon below to open the Context-sensitive Properties for Help Topic window. I select the topic title, without the .htm, and copy it. Then, from the left column, I click the Map File dropdown list and select the HH file to which I want to map the help topic. I then click the icon at the bottom to open the Create/Edit Map ID window. In the first field, I paste the topic ID, and in the second field I enter the map ID I want to assign. Then I click OK.
Usually this places the topic in the correct HH file, highlighted in yellow until I select the topic from the list and then click Assign. For some reason, RH is not adding my topic to the HH file; instead it is adding it to the Project Map file. When I click OK, I get a blank window and the Assign button is disabled. My topic shows up in the Project Map File. The topic displays with the HH files and a .h extension has been added to it.
I can't delete these topics from the Project Map file. I can't assign them map IDs. And this seems to happen randomly. SOMETIMES I can map my topics, but I never know when I can do it and when one of them is going to land in the Project Map file.
Can anyone help?
Personally I've never had or heard of this issue. The fact that this appears to be random suggests that the Map File dropdown may not be displaying the correct map file when you add or assign the map ID. Also check to ensure your project's source files are not located on a network drive, as this can cause issues. If this continues, perhaps you could give us an image of what you are seeing.
The RoboColum(n)
@robocolumn
Colum McAndrew -
SYNCHRONIZING a MAPPING in OWB 11.2.0.3
Hello folks,
I am running into a weird issue with my OWB mappings and synchronization.
Here is what I am experiencing:
1). I update my staging table to add new columns within the design center
2). I go into my corresponding mapping and try to synchronize (INBOUND) the mapping's staging table operator, so as to sync up the staging table with the new changes before validating my mapping
3). During the synchronization PRE-VIEW operation, I realize that the source to target columns that are to be updated DO NOT MATCH one to one.
I am doing an INBOUND synchronization, by OBJECT NAME, REPLACE
Can someone tell me if I am doing something wrong here, please? I am so desperate; I have tried this a few times already.
Mind you, I tried the OUTBOUND synchronization; the columns match one to one, but the new columns are of course showing DELETE.
PLEASE HELP ME WITH ANY POINTERS.
I was up all night, as I had a deadline doing this...
Turned out my mapping was corrupted.
If you do a series of synchronizations by object name, position, or ID, the mapping can end up corrupted.
I had to delete my staging table operator within the mapping, drag and drop it back in, and reconnect, and now my loads work just fine.
Thanks -
How to add 1 more column in standard portal UWL and map the values.
Hi
I have one issue/requirement, please help me out on that also.
In the portal UWL, I want to add one more column, TICKET ID, and as the ticket ID value I will put the work item ID of the ABAP workflow. So whenever the approver opens his portal UWL, in the first column I want to show the ticket ID, say 00012345. How do I add this ticket ID column to the standard portal UWL, and how do I put/map the work item value into that column?
My idea behind this is that whenever, say, an employee wants to know the status of his ticket, he can simply ask his manager about the ticket status by referring to that ticket ID, which the manager can easily find in his portal UWL in that extra TICKET ID column.
Do I have to change anything in the SAP inbox also? Do I have to add one more column in the SAP R/3 inbox as well? And will adding one more column in the SAP R/3 inbox automatically create one more ticket ID column in the portal UWL?
Please let me know, as I do not want to add an extra column in the R/3 inbox; I just want the extra ticket ID column to appear in the portal UWL, and I want to put the work item ID generated at the start of the workflow into that column.
please help me on this.
Thanks...
Edited by: User Satyam on May 29, 2011 6:16 AM
Hi Satyam,
These are called custom attributes. Here is a powerpoint that may be able to assist you with the documentation that the other poster gave you too.
Always remember, too, that when you make a change on the backend R/3 side, you must re-register your UWL connector. And yes, the column must be available on the backend R/3 side. We can't create on-the-fly columns in the UWL that have no reference to the backend system in this case.
Beth Maben
EP - Senior Support Consultant II
AGS Primary Support
Global Support Centre Ireland
Please see the UWL Wiki @
https://www.sdn.sap.com/irj/scn/wiki?path=/display/bpx/uwl+faq *** -
JDO Mapping to same object as parent/child 1:n relationship
Hello,
i'm trying to do a mapping the following way:
Node (1:n) Node where the Node should have the following
accessors:
getChildNodes() : Set
getParentalNode() : Node
I do have the getChildNodes mapping working, but cannot get the parental relation established due to problems with the Node.map file containing the following:
<relationship-field name="parentalNode" multiplicity="one">
<foreign-key name="NODE_TO_PARENTALNODE"
foreign-key-table="TPB_NODE"
primary-key-table="TPB_NODE">
<column-pair foreign-key-column="ID"
primary-key-column="PARENT_ID"/>
</foreign-key>
</relationship-field>
Using this the checker complains about
javax.jdo.JDOFatalUserException: Error in mapping model of class Node: Relationship of field parentalNode: Primary key column PARENT_ID of relationship's foreign key and target class's com.vw.tpb.node.model.Node primary key columns do not match.
Well, when I do a 1:n relation to myself, how should I create two primary keys then?
Anybody have some ideas on this one?
Regards, Udo
Hi Udo,
for the reference from child to parent, the column "ID" should be the primary key column, whereas the column "PARENT_ID" should be the foreign key column.
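Following that advice, the corrected fragment would simply swap the two column attributes. This is only a sketch reusing the table and key names from the original post:

```xml
<relationship-field name="parentalNode" multiplicity="one">
    <foreign-key name="NODE_TO_PARENTALNODE"
                 foreign-key-table="TPB_NODE"
                 primary-key-table="TPB_NODE">
        <column-pair foreign-key-column="PARENT_ID"
                     primary-key-column="ID"/>
    </foreign-key>
</relationship-field>
```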
Best regards,
Adrian -
JDBC-RFC scenario mapping problem
Hi friends,
I have a requirement where I need to fetch one single row from the database and then post the data into an RFC: a header plus 11 rows, and the amount value column should not be null in those 11 rows. Please help me with how to map to achieve this.
Hi, thanks a lot for replying...
That blog and what Suraj said are helpful.
See, as the blog conveys, the source UserSQL_1 has been mapped to user_1 and message1.
OK, that's fine. Now I require the same kind of thing for message1 through message11, and these values depend on the UserSql_1 data. For example, the source is:
A.................../A (source)
B................../B
A......../A (destination)
.a......./a
b....../b (for example : b= A20+ A15)
. k..../k
. B............/B
. a.........../a
k......./k
C............../C
Edited by: vijay Kumar on May 5, 2008 12:06 PM
Edited by: vijay Kumar on May 5, 2008 12:09 PM -
Trying to map between KE30 - Value Field - GL Account
Hi,
I had a question and am unsure which forum to post it in, so I am posting in a few places.
I am working with the KE30 report. My manager has created a report to get to the contribution margin. When the KE30 report is run, I get columns such as Labor, Std OH, FX, etc. These are the value fields.
The value fields are made up of formulas such as A-B+C, and A, B, and C are made up of various GL accounts.
The question is: what transaction code do I use to see the mapping from GL account all the way to the value field? Can anyone help?
Thanks,
M
My issue is that value field KVMAEK is updated for every invoice. This is a duplication:
1.In the case of goods produced in-house VPRS and Cost estimate both post to CO-PA.
2.KVMAEK comes from Cost Estimate.
3.In the CO-PA document we have an extra entry to value field VVMAT (in addition to KWMAEK).
Kindly advise.
Thanks
Sunitha -
RH project's ID doesn't work, mapping's good
Hi all, one of the IDs in my chm file doesn't seem to work.
The ID mapping checks out on the developer's side, but when I press F1 on that particular page, my help project's default topic opens instead. I'm using 'Adobe RoboHelp 7 HTML' and importing a Word document.
Interestingly enough, I've had this problem with two specific IDs. In other words, in some chms, the bad ID may work but then a second ID, which worked fine in previous chms, would become corrupt - and the two IDs switch between themselves. In any one chm, only one topic appears corrupt.
I've had this problem for the last 3 years or so and I've tried a number of things, like starting a new project, using an older project setup that used to work, and I've also tried importing a doc and a docx, neither works.
I'd greatly appreciate any suggestions.
-Steve
Thanks for the quick response, Colum.
There are two IDs giving me problems (screenshots) and in each chm I create, one of the two stops working.
1. SdpUpload:
2. Job_Chain_Query
My project's conversion options are:
From my testing, mapping different topics hasn't helped so that leads me to think that the cause might be somehow connected with the IDs themselves, with dev's side or maybe it's something internal in my RH setup.
I don't know how the IDs can be at fault because they switch between themselves at being corrupted.
On the developer's end, the mapped topic correctly appears when they check these IDs' mappings in the code.
I don't know if it's important, but the other tech writer, who works on an unrelated product and also uses RH 7, hasn't encountered this issue.
If I'm forgetting anything, please let me know and I'll gladly send it over.
- Steve -
I found a few posts on Map IDs, but none of them exactly answer my question, at least not that I can tell. My apologies if this is a duplicate.
I'm using RoboHelp 9.
I am working on a HTMLHelp project with a .chm output. I want to implement dialog-level help. That is, there will be a help button or icon on the window and the user will click that to get help on the window itself.
Let's ignore the question of whether the developers reuse a window, changing the output with variables, which is something I read about on a post I found. Let's assume for a second that the developer will be able to "hook up" the help button to a topic as long as I give her a map file.
As someone noted in a previous topic, auto-map works well the first time you use it, but the question is, how do I automatically get a map ID for a topic I've added?
For example, today I added 20 placeholder topics. The only way I know to get map IDs for a new topic is this:
1. Open Topic Properties (I'm right-clicking on the topic in the Topic List pane and selecting Properties).
2. Click the Advanced tab.
3. Click the Edit Map IDs button.
4. Select the Map File at the top.
5. Make sure that the correct topic is highlighted in the right pane (it's hard to see).
6. CLick the Auto-map button.
Isn't there any way to configure RH so that whenever I create a new topic, RH automatically gives it a map number without me doing anything?
(And am I imagining it, or did RH at one time used to do this? I've used it sporadically since 1995, and I don't remember having to re-open the Properties window. Maybe the window for creating a new topic had a checkbox?)
Colum, I read your article and comments on manually adding the new map IDs. As you can probably tell by the fact that I'm creating placeholder topics, this help file isn't complete yet. Should I just delete the map file, wait until I've finished adding topics, and then auto-number as if I were just starting out?
Hi,
Currently, there is no way to assign a map number automatically to a topic when it is created, but you can manually select multiple topics and auto-map them in one go by doing the following steps:
1. Open the Edit Map IDs dialog by doing one of the following:
(a) Open the Project Set-up pod and navigate to the Context-Sensitive Help > Map files folder and double click on a map file to edit it.
(b) Open Topic Properties, click on Advanced tab and then click on Edit Map IDs button.
2. Select the Map File from the Map File drop-down in the Edit Map IDs dialog.
3. Now select all the required topics by pressing Ctrl button and clicking on the topics.
4. Click on Auto-map button.
Thanks,
Aditi -
OBIEE 10g: Problem on fact table mapping
Hi to everyone,
I have this situation:
1) I have 2 fact tables in the physical model (FHRCTR and HCTOT); these two tables are the same in their FK columns.
2) The structure is: FHRCTR[month_fk, transaction_type, measure A], HCTOT[month_fk, measure B]. 'transaction_type' is an attribute of the table, not a dimension.
3) I have created a logical fact table with these two tables as sources.
4) Each field of the logical table is mapped to the corresponding field in both physical tables, except for the 'transaction_type' column, which has only FHRCTR as its source.
5) I've created a report with the following columns: month_fk, transaction_type, measure_A, measure_B.
The problem is: if I create the report with the columns in point 5, measure_B returns null values while measure_A has the correct value.
Instead, if I create a report without the 'transaction_type' column, the values of measure_B and measure_A are correct.
I remind you that 'transaction_type' in the logical fact table is mapped only to the FHRCTR table, because it doesn't exist in HCTOT.
Can you help me to solve this issue?
Thanks
832596 wrote:
Can you help me to solve this issue?
Thanks
I guess when you set some aggregation rule for 'transaction_type' in the repository then all will be OK (for example, min). But you'll get results that you don't expect.
OCI/SQLT Typecode Mapping - Need Help
Greetings,
I am writing an OCI-based driver/ETL tool against 10.2 instances. I am trying to reduce type conversions between the export/import databases and also to reduce similar conversions in the ETL tool (OCI lib) itself. Hence I went looking for the best possible external datatype (C/C++) mapping for a particular internal type (Oracle storage). OCI provides various APIs to convert between types, but my whole idea is to reduce these conversions as much as possible at any level. So, on whether to keep varchar2 (OCI_TYPECODE_VARCHAR2) as varchar2 (SQLT_XXX), I am not able to find a conclusive answer. I found this conflicting piece of information in the OCI Programmer's Guide, 10.2 Nov 05 release.
Location 1:
Page 3-6, Table 3-2 (*External Datatypes and Codes*)
Also Here: [http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28395/oci03typ.htm#CEGIEEJI]
Varchar2 normally maps to SQLT_CHAR, size(n), as it says.
Location 2: Page 3-27, Table 3-11 (*OCI_TYPECODE to SQLT Mappings*)
Also Here: [http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28395/oci03typ.htm#g454572]
Varchar2 maps to SQLT_VCS, size(n)
Any comment helping me to understand this or pointing out where I am going wrong is much appreciated; thanks in advance.
You're not getting OCI_TYPECODE_BDOUBLE but SQLT_IBDOUBLE.
OCI will always report
- SQLT_IBDOUBLE for BINARY_DOUBLE columns
- SQLT_IBFLOAT for BINARY_FLOAT columns
SQLT_BDOUBLE and SQLT_BFLOAT are SQL type codes that you can specify when you define or bind a variable.
Once again, certain SQL types are meant to be reported by Oracle, and others are meant to be set by applications to tell OCI how to retrieve the data.
Read the OCI guide again:
BINARY_FLOAT and BINARY_DOUBLE
The BINARY_FLOAT and BINARY_DOUBLE datatypes represent single-precision and double-precision floating point values that mostly conform to the IEEE754 standard for Floating Point Arithmetic.
Prior to the addition of these datatypes, all numeric values in an Oracle database were stored in the Oracle NUMBER format. These new binary floating point types will not replace Oracle NUMBER. Rather, they are alternatives to Oracle NUMBER that provide the advantage of using less disk storage.
These internal types are represented by the following codes:
* SQLT_IBFLOAT for BINARY_FLOAT.
* SQLT_IBDOUBLE for BINARY_DOUBLE.
All the following host variables can be bound to BINARY_FLOAT and BINARY_DOUBLE datatypes:
* SQLT_BFLOAT (native float)
* SQLT_BDOUBLE (native double) -
Mapping failure in Null fields passing
Hi
We are facing a problem passing null fields to the RFC side. Sometimes the JDBC columns do not have data,
and in that case our mapping fails to call the RFC.
Please let me know any methodologies to prevent this.
The target-side field occurrence is 0..1.
Regards
Rambarki
Hi Rambarki,
since your target field is optional, you can map your target only if the value is present in the source; do not map it if it is null.
Also, are you getting any specific error?
Have a look at this thread as well:
JDBC Adapter: Mapping: result contains only null values
Regards
Anand -
Hi,
I have a Map<String,String> which I want to display in a TableView with 2 columns (a Key column and a Value column). I am able to display the data by creating an ObservableList of objects having the key and value attributes, but I want to bind the Map to the TableView. I have not come across anything related to this yet. Please help.
Thanks,
Avinash
This is working for me:
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import javafx.beans.property.SimpleObjectProperty;
import javafx.beans.property.SimpleStringProperty;
import javafx.beans.value.ObservableValue;
import javafx.collections.FXCollections;
import javafx.collections.ObservableList;
import javafx.scene.control.TableCell;
import javafx.scene.control.TableColumn;
import javafx.scene.control.TableView;
import javafx.util.Callback;

public class Table extends TableView<Map.Entry> {
    public Table() {
        super();
        setEditable(true);
        setColumnResizePolicy(TableView.CONSTRAINED_RESIZE_POLICY);
        buildColumns();
        populate();
    }
    private void buildColumns() {
        TableColumn<Map.Entry, String> nCol = new TableColumn<Map.Entry, String>("Name");
        nCol.setEditable(true);
        nCol.setCellFactory(new Callback<TableColumn<Map.Entry, String>, TableCell<Map.Entry, String>>() {
            @Override
            public TableCell<Map.Entry, String> call(TableColumn<Map.Entry, String> param) {
                return new CustomTableCell(); // CustomTableCell is defined elsewhere in my project
            }
        });
        nCol.setCellValueFactory(new Callback<TableColumn.CellDataFeatures<Map.Entry, String>, ObservableValue<String>>() {
            @Override
            public ObservableValue<String> call(TableColumn.CellDataFeatures<Map.Entry, String> param) {
                String key = (String) param.getValue().getKey();
                return new SimpleObjectProperty<String>(key);
            }
        });
        TableColumn<Map.Entry, String> vCol = new TableColumn<Map.Entry, String>("Value");
        vCol.setEditable(true);
        vCol.setCellFactory(new Callback<TableColumn<Map.Entry, String>, TableCell<Map.Entry, String>>() {
            @Override
            public TableCell<Map.Entry, String> call(TableColumn<Map.Entry, String> param) {
                return new CustomTableCell();
            }
        });
        vCol.setCellValueFactory(new Callback<TableColumn.CellDataFeatures<Map.Entry, String>, ObservableValue<String>>() {
            @Override
            public ObservableValue<String> call(TableColumn.CellDataFeatures<Map.Entry, String> param) {
                String value = (String) param.getValue().getValue();
                return new SimpleStringProperty(value);
            }
        });
        getColumns().setAll(nCol, vCol);
    }
    private void populate() {
        Map<String, String> data = new HashMap<String, String>();
        data.put("a", "aa");
        data.put("b", "bb");
        data.put("c", "cc");
        data.put("d", "dd");
        Set<Map.Entry<String, String>> dataSet = data.entrySet();
        ObservableList tableData = FXCollections.observableArrayList(dataSet);
        setItems(tableData);
    }
}
The binding is still pending.