Transformation file: field mapping
I am getting some fields from the data file which I don't need in BPC 7 NW. For example, out of 10 fields, 2 are not required for BPC, but my source system file contains them.
How do I tell the system to ignore these fields and load the remaining ones? Currently my load is failing with a message that some of the legacy fields are not mapped.
Appreciate the inputs..
Hi
Well, the mapping can be done as follows. Let's say you have 5 fields in your source file and you need only 3 of them:
BPC Fields    Source Fields
ID = *COL(1)
CO_AREA = *COL(2)
BUS_AREA = *COL(4)
In the above example, only the required fields are mapped from the source to the target BPC dimensions. You do not need to specify anything to ignore a field: if you do not map a field, it is ignored automatically, so with the mapping above source columns 3 and 5 are ignored.
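Put together as a complete transformation file, the mapping above would look something like this (a minimal sketch; the *OPTIONS values are assumptions for a comma-delimited source file with a header row and may need adjusting to your file format):

```
*OPTIONS
FORMAT = DELIMITED
HEADER = YES
DELIMITER = ,

*MAPPING
ID=*COL(1)
CO_AREA=*COL(2)
BUS_AREA=*COL(4)

*CONVERSION
```

Columns 3 and 5 simply do not appear under *MAPPING, so they are dropped during the load.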
Regards
Similar Messages
-
File to IDoc (need advice on how to map file fields)
Hello friends,
I spent quite some time reading all the helpful blogs and threads regarding the File-to-IDoc scenario. However, I have a very basic question (maybe it's trivial, as I am just new to XI).
In my scenario I have Bank Master data (in a CSV file) and it does not correspond exactly to Bank Master IDOC structure BANK_CREATE01.
I understand that I will need to use the file adapter to get this file into XI and then use an IDoc adapter to send it to R/3. (Please correct me if I am wrong.)
Now when I use the file adapter, how will I do the data mapping? I mean, do I need to create a structure which corresponds to the file fields, so that XI automatically loads my file fields into this structure and I can then use it to map fields to the IDoc?
My only issue seems to be how I will see the FLAT FILE data in XML form in XI.
To explain it further my CSV file looks like
"GB,123456,11223344,GBP, London,.."
where
GB corresponds to country,
123456 corresponds to Bank key,
112233445566 corresponds to Bank account,
GBP is currency and so on
So should I create a data type maintaining the same sequence of fields as above, without using any hierarchy like in the IDoc?
If this is not possible, will I need to transform my input file exactly into the IDoc structure and then use it?
Hope I have managed to explain it.
Appreciate your help on same.
Thanks
Shirin
Hello Shirin,
First of all, the CSV file has to be converted into an XML file; to achieve this, File Content Conversion has to be configured. Once this is done, please make the following changes in R/3 and PI so that the IDoc can be sent to R/3.
Configuration required at the XI side:
Go to IDX1: configure the port.
Go to IDX2: load the IDoc metadata.
Go to SM59: create an RFC destination which points to the R/3 system; this is required when your IDoc is sent to the R/3 system.
Configuration required at the R/3 side:
Maintain a logical system for PI (transaction SALE).
Maintain a partner profile for the XI system (WE20).
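For the CSV shown above, the sender channel File Content Conversion parameters might look roughly like this (a sketch only; the recordset/structure name "Bank" and the field names are assumptions and must match the data type you create in the repository):

```
Document Name: MT_BankMaster
Recordset Structure: Bank,*
Bank.fieldNames: Country,BankKey,BankAccount,Currency,City
Bank.fieldSeparator: ,
Bank.endSeparator: 'nl'
```

With this, each CSV line becomes one Bank element with the listed fields as child elements, which you can then map to the BANK_CREATE01 structure in the message mapping.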
Thanks,
Kiran. -
EDI 821 and EDI 823 field mapping file and basic IDOC type and message type
Hi All,
We are facing some issues regarding EDI 821 and EDI 823 file mappings.
We are mapping EDI 821 and EDI 823 transactions into SAP using IDocs. Currently we are using the entries below.
EDI 823 - Lock Box
Basic IDOC type - FINSTA01
Message type - FINSTA
Process Code - FINS
The problem is that we are able to get the IDoc into SAP, but with red status; the various errors were due to not being able to create the lockbox entry in SAP. Once we even got yellow status, but the lockbox entry was still not created, with errors like "No conversions", "No header", etc.
EDI 821 -
Basic IDOC type - PEXR2002
Message type - PAYEXT OR REMADV
Process Code - PEXC OR REMA
We are facing the same problem here also: the internal payment order is not created in SAP, and the IDoc was generated with yellow or red status.
We are trying different combinations but nothing is working so far.
I need the following to proceed further.
1) Are the IDoc type, message type and process codes that I am using now for both EDI 821 and EDI 823 correct?
2) If they are not correct, can you please let me know the correct entries?
3) Please provide the field mapping if any of you have worked earlier with the above IDoc and message types (or new ones). We have one field mapping now, but if you can send me yours I can recheck it.
4) Do we have to create any configuration or customizing in SAP to create the IDoc in green status? If so, please let me know the customizing steps and procedures for both EDI 821 and EDI 823.
thanks in advance for all your help.
Please let me know if my question is not clear.
Thanks,
Ramesh.
Hi Ramesh,
I believe you are using those interfaces with the business partner type Bank. The IDoc type, message type and process code you have used are perfectly correct.
First of all, did you enable your bank for EDI? The house bank has to be EDI-enabled first; only then can your IDocs be processed. Talk to your FI functional consultant, who might help you.
Or you can give me the exact error and I can help you as well.
Thanks,
Mahesh. -
PI 7.1 : Taking a input PDF file and mapping it to a hexBinary attribute
Hello All,
We have a requirement which involves taking an input PDF file, mapping it to a message type with a binary attribute and sending it to an R/3 system.
Can anyone please detail the steps or point us to the correct documents for setting up the scenario?
The scenario is file to proxy. The part where we need assistance is picking up the input PDF and mapping it to the binary field.
Thanks.
Kiran
Thanks Praveen, Mayank, Sarvesh and Andreas for your valuable help with the issue.
I was able to successfully pick up the binary PDF file from a file server, encode it using Base64 and post it to R/3.
I used the following code snippet and added the mentioned JAR files to create a new JAR file, which was used as a Java mapping in the operation mapping.
import com.sap.aii.mapping.api.StreamTransformation;
import com.sap.aii.mapping.api.*;
import com.sap.aii.utilxi.base64.api.*;
import java.io.*;
import java.util.*;

public class Base64EncodingXIStandard implements StreamTransformation {

    String fileNameFromFileAdapterASMA;
    private Map param;

    public void setParameter(Map map) {
        param = map;
        if (param == null) {
            param = new HashMap();
        }
    }

    // Stand-alone test driver: args[0] = input file, args[1] = output file
    public static void main(String[] args) {
        Base64EncodingXIStandard con = new Base64EncodingXIStandard();
        try {
            InputStream is = new FileInputStream(args[0]);
            OutputStream os = new FileOutputStream(args[1]);
            con.execute(is, os);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void execute(InputStream inputstream, OutputStream outputstream) {
        byte[] buffer = new byte[1024 * 5000];
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        int len;
        String str = null;

        // Read the file name set by the sender channel via Adapter-Specific Message
        // Attributes; fall back to a default when running outside XI (param is null
        // in the main() test driver)
        DynamicConfiguration conf = (param == null)
            ? null : (DynamicConfiguration) param.get("DynamicConfiguration");
        DynamicConfigurationKey KEY_FILENAME =
            DynamicConfigurationKey.create("http://sap.com/xi/XI/System/File", "FileName");
        fileNameFromFileAdapterASMA = (conf == null) ? null : conf.get(KEY_FILENAME);
        if (fileNameFromFileAdapterASMA == null) {
            fileNameFromFileAdapterASMA = "ToBase64.txt";
        }

        try {
            // Buffer the whole binary payload, then Base64-encode it
            while ((len = inputstream.read(buffer)) > 0) {
                baos.write(buffer, 0, len);
            }
            str = Base64.encode(baos.toByteArray());

            // Wrap the encoded content in the target XML structure
            outputstream.write("<?xml version=\"1.0\" encoding=\"utf-8\"?><ROOT>".getBytes());
            outputstream.write(("<FILENAME>" + fileNameFromFileAdapterASMA + "</FILENAME>").getBytes());
            outputstream.write(("<BASE64DATA>" + str + "</BASE64DATA></ROOT>").getBytes());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I had to make the following configuration settings:
1) Create a sender communication channel with Adapter-Specific Message Attributes and the File Name checkbox checked.
2) Use the Java mapping in the operation mapping.
The scenario is working smoothly without any issues.
Thanks.
Kiran -
Hi,
Detail:
========
I have a class called CustomerSpecificField extending another class NameValueTimestamp, which contains 3 attributes: name, value, timestamp.
In my package.jdo file, when I tried to map my fields, I got the following exception; the attributes in the parent class are not recognized for mapping at all.
JDO Info
========
<class name="CustomerSpecificField">
<extension vendor-name="kodo" key="jdbc-class-map" value="base">
<extension vendor-name="kodo" key="table" value="ADB.UP_CUST_SPEC_FIELD"/>
<extension vendor-name="kodo" key="pk-column" value="ID"/>
</extension>
<field name="name">
<extension vendor-name="kodo" key="jdbc-field-map" value="value">
<extension vendor-name="kodo" key="column" value="NAME"/>
</extension>
</field>
<field name="timestamp">
<extension vendor-name="kodo" key="jdbc-field-map" value="value">
<extension vendor-name="kodo" key="column" value="TIMESTAMP"/>
</extension>
</field>
</class>
Error:
=======
Exception in thread "main" kodo.util.FatalException: java.io.IOException:
org.xml.sax.SAXException: file:/C:/kodo-jdo-3.1.3/bin/com/bo/package.jdo [Location: Line: 65, C: 37]: Field "timestamp" is not declared in "CustomerSpecificField". [java.lang.NoSuchFieldException]
NestedThrowables:
java.io.IOException: org.xml.sax.SAXException: file:/C:/kodo-jdo-3.1.3/bin/com/bo/package.jdo [Location: Line: 65, C: 37]: Field "timestamp" is not declared in "CustomerSpecificField". [java.lang.NoSuchFieldException]

I have a class called CustomerSpecificField extending another class NameValueTimestamp which contains 3 attributes name, value, timestamp.

In JDO, each class can only persist fields that it declares. So you need to make your superclass a persistent class and map it, and reference it with the persistence-capable-superclass attribute from your subclass. -
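A sketch of what the suggested fix could look like in package.jdo (the superclass table name and the "flat" inheritance mapping are assumptions; the extension keys follow the Kodo syntax already used in the metadata above):

```xml
<class name="NameValueTimestamp">
  <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="ADB.UP_NAME_VALUE_TS"/>
    <extension vendor-name="kodo" key="pk-column" value="ID"/>
  </extension>
  <field name="name">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
      <extension vendor-name="kodo" key="column" value="NAME"/>
    </extension>
  </field>
  <field name="timestamp">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
      <extension vendor-name="kodo" key="column" value="TIMESTAMP"/>
    </extension>
  </field>
</class>
<class name="CustomerSpecificField"
       persistence-capable-superclass="NameValueTimestamp">
  <extension vendor-name="kodo" key="jdbc-class-map" value="flat"/>
</class>
```

The field mappings move from CustomerSpecificField to the class that actually declares the fields, and the subclass points to it via persistence-capable-superclass.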
File adapter mapping - parser: no data allowed here
Hello,
something really nice:
The scenario picks up a file and moves it, renamed with a timestamp, to another destination via the FTP protocol.
File picked up: ok
But in pipeline:
Message in trace:
<Trace level="1" type="T">RuntimeException during appliction Java mapping com/sap/xi/tf/_mm_test_</Trace>
<Trace level="1" type="T">Runtime exception occurred during execution of application mapping program com/sap/xi/tf/_mm_test_: com.sap.aii.utilxi.misc.api.BaseRuntimeException; Fatal Error: com.sap.engine.lib.xml.parser.ParserException: XMLParser: No data allowed here: (hex) 4f, 6e, 6c(:main:, row:1, col:3)</Trace>
Test file is really simple:
Data type: "test" - xsd string,
connected to message type "head"
Message mapping / Interface mapping in Int. Dir. :
Test: OK
RWB: "message successfully transmitted to endpoint < Xi pipeline URL > using connection AFW"
It would be very helpful if someone here has an idea where this comes from!
best regards
Dirk
Hello again,
Your replies were really helpful to move a step forward. The error message has changed now; I have an XML format now in the payload.
Here is an example:
(Don't ask about the reason for the naming!)
But now the error message is:
com.sap.aii.utilxi.misc.api.BaseRuntimeException: RuntimeException in Message-Mapping transformation: Cannot produce target element /ns:mt_blob. Check xml instance is valid for source xsd and target-field mapping fulfills requirements of target xsd at com.sap.aii.mappingtool.tf3.AMappingProgram.start
OK! I changed the receiver channel to conversion too.
The recordset structure (same name definition as in the sender channel) is added with addHeaderLine and fieldSeparator.
My understanding (of the naming):
- incoming message linked to message type abc
- text conversion to fff (recordset structure) in the sender channel
- "re"conversion from fff (from the payload) in the receiver channel
- the "re"conversion is automatically linked to the message type I am using.
Do you have an idea what I forgot to do, or where my misunderstanding is?
Best regards
Dirk -
Field mapping and value mapping - Basic
Hi,
Assuming I have a repository containing a main table with 3 fields
1)Product
2)Description
3)Manufacturer (lookup field)
When I map the source file to the destination repository:
1) I map the product and description fields of the source file and the destination repository.
2) I map the manufacturer field between the source file and the destination repository.
3) I value-map the contents of the manufacturer field between the source and the destination.
If I have a source file like this:
Product Description Manufacturer
a a Hindustan lever
b b Hindustan lever
a a P&G
b b P&G
Since I have made the manufacturer field a lookup field, it can hold the lookup values Hindustan Lever and P&G.
What happens to product? I have made only a field mapping. Will the values a & b be stored in the destination repository this way?
Is a value mapping required for product as well, to hold both the values?
Please help
Thanks,
Vignesh
Hi Vignesh,
You do not have to value-map it. Import Manager will automatically assign both values to products a and b. Basically, this is set in the configuration options of the Import Manager: there is a setting "Merge source records based on matching field". If that option is set to Yes, both values will be assigned to products a and b respectively.
Best Regards,
Dheeraj -
Orion-ejb-jar: cmp-field-mapping specs REMOTE home
In the tech preview documentation, in the EJB dev guide, in the section which details the orion-ejb-jar, I noticed something odd about the "cmp-field-mapping" element. It has an attribute called "ejb-reference-home", which is supposed to be "the JNDI location of the field's remote EJB-home if the field is an entity EJBObject or EJBHome".
I would assume this would only be set if this field was specified as a CMR field in the ejb-jar.xml file.
The odd thing is, it specifies the REMOTE home, not the LOCAL home. Is this just a typo?
I believe that it is not a JDeveloper issue but an OC4J one. It does not pick up a new orion-ejb-jar.xml if you re-deploy the bean(s). It should deploy your orion-ejb-jar.xml when you deploy your application for the first time (so there is no orion-ejb-jar.xml in the deployments directory). If you want OC4J to pick the changes up, remove orion-ejb-jar.xml from $OC4J_HOME/application-deployments/<application>/<Bean>.jar/ and then deploy it either manually or with JDeveloper. I hope this helps.
-
LSMW Field Mapping: can't map Batch Input Structure for Session Data
In step 5, Maintain Field Mapping and Conversion Rules, I cannot see the Batch Input Structure for Session Data fields.
Can somebody tell me what's wrong?
Here's what I see:
Field Mapping and Rule
BGR00 Batch Input Structure for Session Data
Fields
BMM00 Material Master: Transaction Data for Batch Input
Hi Baojing,
To see structure BGR00, you first have to map this structure to the input file structure in step 4 (Maintain Structure Relations).
Regards
Dhirendra -
LSMW No fields in field mapping
I created a project to create material basic data. In Maintain Field Mapping and Conversion Rules, I don't see any fields. What could I have done wrong? I am using batch input recording.
Did you assign the source structure to the target structure? If not, assign them and see the results.
Step 1 - Maintain Object Attributes
Recording is done for the transaction
Step 2 - Maintain Source Structures
Maintain your source structures
Step 3 - Maintain Source Fields
Maintain source fields under the source structure
Step 4 - Maintain Field Mapping and Conversion Rules
Map the source structure fields to target structure fields
Step 5 - Specify file
Here you specify the file path to the source structure
Thanks
Seshu -
Maintain Field Mapping and Conversion Rules//LSMW
Hello Friends,
I want to add new fields in step no. 5 (Maintain Field Mapping and Conversion Rules).
In detail: I am going to upload the GL balances. For the DR and CR line items the fields are the same, so the system does not accept the same field value twice; therefore I have added a 1 to the CR line item fields, as in the example below.
BSEG-WRBTR (Dr line item)
BSEG-WRBTR1 (Cr line item)
But the BSEG-WRBTR1 (Cr line item) field is not displayed in step no. 5 for mapping to the source field.
please let me know the solution for this.
thanks
swapna.
Hi,
I would like to ask a few questions.
1. Are you using batch input recording or a program for uploading (through LSMW)?
2. Are all your debit and credit line items the same for every transaction? I believe they should be, because you are uploading balances.
You should not have two fields with the same name: for example, if one field is WMBTR, then WMBTR should not appear again; it should be WMBTR1 and WMBTR2. Make sure you have done the field mapping properly; when the mapping is done, all the fields must have been mapped. If any one of the fields is not mapped, it will not be uploaded.
Please see the following LSMW sample guide:
http://www.scmexpertonline.com/downloads/SCM_LSMW_StepsOnWeb.doc
Maintain Object Attributes
Do the recording. Make sure that you do not have two fields with the same name; if you do, double-click on the field name and add a 1 or 2 to differentiate the field names. Copy those fields and descriptions into an Excel sheet and delete the blank lines; then in Excel use Data => Text to Columns, and your field names and descriptions will now be in two columns. Copy them, put your cursor on the next sheet, then Edit => Paste Special => Transpose; all the columns will become rows. Now your file structure is ready.
Maintain Source Structures
Give some unique structure name and description.
Maintain Source Fields
Here you add the fields that are used in the first Excel sheet: just copy them, make all the fields type C, and give a length of 60 for all fields.
Maintain Structure Relations
Though the structure relations are already created, go to this step, click Edit, then click Create Structure Relation, and accept the message stating that the structure relation has already been created.
Maintain Field Mapping and Conversion Rules
Do the field mapping for all the fields; all the fields will be stretched and you will see five rows against each row. If there is any row that has NOT stretched, something is wrong in the mapping.
Maintain Fixed Values, Translations, User-Defined Routines
There is nothing to be done at this step; you can simply ignore it.
Specify Files
Make sure you have saved your Excel file as .txt (before saving, copy the data from sheet 2 to sheet 3 and save sheet 3 as a Text (Tab delimited) file). Select your file, make SURE that you have selected the "Tabulator" radio button, and say OK.
Assign Files
Go to this step, click the Create Assignment button, accept the message and say OK.
Read Data
Remove the two check boxes and just click the Execute button. See the log, and make sure the number of entries (lines) in your Excel file matches it.
Display Read Data
Give 1 to 999 lines, double-click on one of the lines and see whether the fields are mapped correctly or not.
Convert Data
Execute and check that the log matches the number of entries.
Display Converted Data
Give 1 to 999, double-click on one of the lines and see whether the fields are mapped correctly or not.
Create Batch Input Session
Check the Keep Batch Input Sessions check box, then execute. If you select that check box, the session remains even after execution and you can analyze what happened.
Run Batch Input Session (takes you to SM35)
Go to SM35, select the batch and click the Process (Execute) button; make sure you have checked the first three check boxes on the right-hand side and FOREGROUND (because you want to watch what it is creating). Say OK. Keep pressing ENTER on your keyboard to move the session forward.
If you follow these steps along with the guide, you should be successful. There may be small differences between the file and what I have explained, but ultimately the purpose is the same. Hope this is useful, and let me know in case you have any issues.
Regards, Ravi -
Hello,
I am doing Material master load using LSMW in 3loads.
The first load is for basic data with MATNR, MTART, MBRSH, MAKTX, MEINS, BISMT.
What should be the format of my input file? Should it be like in the step "Maintain Field Mapping and Conversions"? Because when I see the data in "Display Read Data" and "Display Converted Data", it loads the records but deletes parts of them, and after executing the last step it gives errors.
If the LSMW run is successful, where can I see the output?
I would appreciate it if someone could help me; I am doing an LSMW for the first time.
Is the problem with the field mapping and the file format?
thanks.
Raghu
The file format is the same as in the field mapping and conversions, but it still picks the values from the file in a different order. I am saving the Excel sheet in tab-delimited form and specified that in the LSMW. If anyone has done a material master load, I would appreciate it if you could give the steps and file format.
Points will be rewarded.
thanks. -
Hi,
I am new to using EMIGALL. I have one query:
Where does the field mapping take place in EMIGALL? In LSMW, we maintain the field mapping by mapping source and target fields. How is the same done in EMIGALL?
Thanks,
Sachin.
Hi,
please see the very detailed description of the auto-structure and the migration object. The process differs for each structure, but in most cases it is quite similar to LSMW. You can define pre- and postprocessing code where additional conversion can be done; the fields which EMIGALL expects in the input file are listed when you click on the auto-structure.
KR
Uwe -
Hi,
I'm trying to use LSMW. What is the fastest way to maintain the source fields and relationships? Is it possible to define the source field names and lengths in a text file and load it into LSMW?
Regards,
Kit
Hi Kit,
you can create source fields in a faster way.
Menu path: in the 3rd step, i.e. Maintain Source Fields:
Source Fields -> Table Maintenance
Here you can fill in all the fields at once.
You can also maintain the fields in a file and copy and paste them into the table maintenance.
In the 5th step, i.e. Maintain Field Mapping and Conversion Rules:
Extras -> Auto Field Mapping
Now you get the auto field mapping settings pop-up; click OK.
Then you get the auto field mapping proposal pop-up.
Here check the target field and source field and click the Accept Proposal push button for all the fields.
With this, the source and target field mapping is done automatically.
Reward if helpful
raam -
CMP Bean's Field Mapping with oracle unicode Datatypes
Hi,
I have a CMP bean which maps to an RDBMS table, and the table has some Unicode datatypes such as NVARCHAR and NCHAR.
Now I was wondering how OC4J / the Oracle EJB container handles queries with Unicode datatypes.
What do I have to do in order to properly develop and deploy a CMP bean which has fields mapped onto database UNICODE fields?
Regards
atif
Based on the sun-cmp-mapping file descriptor
<schema>Rol</schema>
it is expected that a file called Rol.schema is packaged with the ejb.jar. Did you perform capture-schema after you created your table? -
Hi SAP Gurus,
I am maintaining an LSMW with the recording option:
1) Recorded transaction AS92
2) Maintain Object Attributes with my recording
3) Maintain Source Structures
4) Maintain Source Fields
5) Maintain Structure Relations between SAP and the source structure; I chose Check to examine the structure relationships for errors: OK.
6) Maintain Field Mapping and Conversion Rules, and now I have trouble, because I don't get a screen with fields. I can't do the mapping.
Maybe there is a button/function which joins the SAP structure with my structure?
Can anyone help me?
Regards
Stenwa
Perhaps you have to delete the recording and do a new recording.
Have you done the following steps
Maintain Object Attributes
Go to change mode
Go to recordings overview:
A recording should look like this:
AS91 asset legacy transfer
Transaction: AS91 Create Old Asset
Owner:
SAPLAIST 0105
BDC_CURSOR ANLA-BUKRS
BDC_OKCODE /00
ANLA-ANLKL ANLKL asset class
ANLA-BUKRS BUKRS company code
SAPLAIST 1000
BDC_OKCODE /00
BDC_SUBSCR SAPLAIST
ANLA-ANLN2 ANLN2 asset subnumber
BDC_SUBSCR SAPLATAB
BDC_SUBSCR SAPLATAB
BDC_SUBSCR SAPLAIST
ANLA-TXT50 TXT50 asset description line 1
ANLA-TXA50 TXA50 asset description line 2
ANLH-ANLHTXT ANLHTXT asset description line 3
Use the SAP field names like TXA50; these you have to fill in manually.
When you record and there are default values, still overwrite them manually (otherwise you miss them here).
Blocks
ANLB-NDJAR(01) NDJAR_01 Useful life in years line 01
ANLB-NDJAR(02) NDJAR_02 Useful life in years line 02
ANLB-NDJAR(05) NDJAR_05 Useful life in years line 05
ANLB-NDJAR(06) NDJAR_06 Useful life in years line 06
ANLB-NDPER(01) NDPER_01 Useful life in periods line 01
ANLB-NDPER(02) NDPER_02 Useful life in periods line 02
ANLB-NDPER(05) NDPER_05 Useful life in periods line 05
ANLB-NDPER(06) NDPER_06 Useful life in periods line 06
Maintain Source Fields
ASSET_LEGACY_TRANSFER asset legacy transfer
ANLKL C(004) asset class
TXT50 C(050) asset description line 1
INVNR C(025) asset tag number
AKTIV C(010) capitalization date
GSBER C(004) business area
KOSTL C(010) cost center
GDLGRP C(008) location
AFASL-1 C(004) Asset depreciation key line 01
AFASL-2 C(004) Asset depreciation key line 02
NDJAR-1 C(003) Useful life in years line 01
NDJAR-2 C(003) Useful life in years line 02
NDPER-1 C(003) Useful life in periods line 01
NDPER-2 C(003) Useful life in periods line 02
AFABG-1 C(010) Dep. start date line 01
AFABG-2 C(010) Dep. start date line 02
ANBTR01-1 C(013) Acquisition value, tax books
ANBTR02-1 C(013) Acquisition value, LO books
ANBTR01-6 C(013) *** depreciation, tax books
ANBTR02-6 C(013) *** depreciation, LO books
These field names you can use in the header of the Excel file. In some cases, with the blocks, you have one value in the source fields (like useful life) but fill it into 3 depreciation areas; in that case you have to map it manually.
When you have used the same field names, you can use the automatic mapping option in Maintain Field Mapping and Conversion Rules.
Edited by: Paul Annotee on Apr 28, 2009 9:27 AM