Conversion/Mapping from BW FI-GL Cube to Legal Application

Hi,
When I try to extract the FI-GL Actuals data from BW to BPC, I find that there is no ConsGroup InfoObject in BW. Do I need to create a Z InfoObject in BW and eventually map it to the ConsGroup dimension in BPC, with the conversion defined as None (external) to G_NONE (internal)? Is that the right approach?
Thanks,
Vara

Hi Vara,
You do not have to create anything in BW. If a constant value should flow into that BPC dimension, you can use *STR(constant) in the transformation file and assign it to the dimension.
For example:
CONSGROUP=*STR(G_NONE)
Keep this line in the mapping section of the transformation file (see the sketch below); this should solve your problem.
Regards,
Surya Tamada.
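For orientation, here is a minimal transformation-file sketch with that constant mapping in place. The other dimension assignments (ACCOUNT, ENTITY and their source InfoObjects) are only placeholders and will differ in your model:
*MAPPING
ACCOUNT=0GL_ACCOUNT
ENTITY=0COMP_CODE
CONSGROUP=*STR(G_NONE)
Every dimension of the target BPC model needs a line in this section; *STR() simply writes the literal value instead of reading a source field.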

Similar Messages

  • Data extraction / Conversion / Mapping / Migration

    Hi All,
    Can anyone please explain the meaning of the terms below and how data is moved from a non-SAP system to an SAP system?
    "Data extraction / Conversion / Mapping / Migration"
    Kindly give me detailed step-by-step instructions on how to accomplish this. Also, if there is any documentation, please forward it.
    I appreciate the help in advance.
    Raj

    Hi Raj,
    We can use LSMW, BDC, or eCATT to upload the data from the legacy system into the SAP system.
    Data extraction - the data has to be extracted from the legacy system into a flat file, for example XLS.
    Conversion - convert the data as required by SAP.
    Mapping - map the extracted fields to the SAP fields for the upload.
    Migration - migrate the data from the legacy system to SAP using a tool such as LSMW, BDC, or CATT.
    Good Luck
    Om

  • Conversion mapping is losing time zone data during daylight saving time

    We have a problem with conversion of Calendars to timestamp with timezone for the last hour of Daylight Saving Time (e.g. 01:00 EDT - 01:59 EDT) where it is being interpreted as Standard Time which is in reality 60 minutes later.
    We've written a JUnit test case that runs directly against TopLink to avoid any issues with WAS and its connection pooling.
    The Calendar theDateTime comes from an object called TimeEntry which is mapped to a TIMESTAMP WITH TIMEZONE field using conversion mapping with Data Type TIMESTAMPTZ (oracle.sql) and Attribute Type Calendar (java.util).
    We are using:
    Oracle TopLink - 10g Release 3 (10.1.3.0.0) (Build Patch for Bugs 5145690 and 5156075)
    Oracle9i Enterprise Edition Release 9.2.0.7.0 - 64bit Production
    Oracle JDBC driver Version: 10.2.0.1.0
    platform=>Oracle9Platform
    Execute this Java:
    SimpleDateFormat format = new SimpleDateFormat("MM/dd/yyyy HH:mm z");
    TimeZone tzEasternRegion = TimeZone.getTimeZone("US/Eastern");
    Calendar theDateTime = Calendar.getInstance(tzEasternRegion);
    theDateTime.setTime(format.parse("10/29/2006 01:00 EDT"));
    Persist to the database and execute this SQL:
    SELECT the_date_time, EXTRACT(TIMEZONE_REGION FROM the_date_time), EXTRACT(TIMEZONE_ABBR FROM the_date_time), EXTRACT(TIMEZONE_HOUR FROM the_date_time)
    FROM time_table WHERE time_table_id=1
    This provides the following results:
    THE_DATE_TIME EXTRACT(TIMEZONE_REGION FROM the_date_time) EXTRACT(TIMEZONE_ABBR FROM the_date_time) EXTRACT(TIMEZONE_HOUR FROM the_date_time)
    29-OCT-06 01.00.00.000000 AM US/EASTERN US/Eastern EST -5
    The wrong time zone is in the database. It should be EDT -4. Let's test the SQL that should be generated by TopLink. It should look like the following.
    Execute this SQL:
    UPDATE time_table SET the_date_time = TO_TIMESTAMP_TZ('10/29/2006 01:00 US/Eastern','mm/dd/yyyy HH24:MI TZR') WHERE (time_table_id=1)
    SELECT the_date_time, EXTRACT(TIMEZONE_REGION FROM the_date_time), EXTRACT(TIMEZONE_ABBR FROM the_date_time), EXTRACT(TIMEZONE_HOUR FROM the_date_time)
    FROM time_table WHERE time_table_id=1
    This provides the same results:
    THE_DATE_TIME EXTRACT(TIMEZONE_REGION FROM the_date_time) EXTRACT(TIMEZONE_ABBR FROM the_date_time) EXTRACT(TIMEZONE_HOUR FROM the_date_time)
    29-OCT-06 01.00.00.000000 AM US/EASTERN US/Eastern EST -5
    Now, execute this SQL:
    UPDATE time_table SET the_date_time = TO_TIMESTAMP_TZ('10/29/2006 01:00 EDT US/Eastern','mm/dd/yyyy HH24:MI TZD TZR') WHERE (time_table_id=1)
    SELECT the_date_time, EXTRACT(TIMEZONE_REGION FROM the_date_time), EXTRACT(TIMEZONE_ABBR FROM the_date_time), EXTRACT(TIMEZONE_HOUR FROM the_date_time)
    FROM time_table WHERE time_table_id=1
    This provides better results:
    THE_DATE_TIME EXTRACT(TIMEZONE_REGION FROM the_date_time) EXTRACT(TIMEZONE_ABBR FROM the_date_time) EXTRACT(TIMEZONE_HOUR FROM the_date_time)
    29-OCT-06 01.00.00.000000 AM US/EASTERN US/Eastern EDT -4
    The correct time zone is now in the database. Let's test reading this with the following Java:
    System.out.println("cal= " + theDateTime);
    System.out.println("date= " + theDateTime.getTime());
    System.out.println("millis= " + theDateTime.getTimeInMillis());
    System.out.println("zone= " + theDateTime.getTimeZone());
    This provides the following results:
    cal= java.util.GregorianCalendar[...]
    date= Sun Oct 29 01:00:00 EST 2006
    millis= 1162101600000
    zone= sun.util.calendar.ZoneInfo[id="US/Eastern",...]
    The TimeZone object is correct since we are using the US/Eastern regional time zone, but the millis are wrong which makes the time EST instead of EDT. The millis should be 1162098000000.
    The conversion from java.util.Calendar to TIMESTAMPTZ loses the actual offset when setting to a regional time zone. It can maintain this info by specifying it explicitly.
    The conversion from TIMESTAMPTZ to java.util.Calendar also loses the actual offset even if the correct offset is in the database.
    Has anyone else encountered this conversion problem? It appears to be a conversion problem in both directions. I know that the Calendar is lenient by default and will assume Standard Time if time is entered during the repeated 1 o'clock hour at the end of Daylight Saving Time, but the Calendars we are using are explicit in their time, so this would be classified as data corruption.
    Nik
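    For reference, a standalone sketch (not from the original post) of how the ambiguous 01:00 can be forced to the Daylight Saving interpretation by setting the zone and DST offsets on the Calendar explicitly; the expected millis value is the one quoted above:
    import java.util.Calendar;
    import java.util.TimeZone;

    public class DstDisambiguation {
        public static void main(String[] args) {
            TimeZone tzEasternRegion = TimeZone.getTimeZone("US/Eastern");
            Calendar theDateTime = Calendar.getInstance(tzEasternRegion);
            theDateTime.clear();
            // 10/29/2006 01:00 falls in the repeated hour at the end of DST;
            // a lenient Calendar resolves it to Standard Time by default.
            theDateTime.set(2006, Calendar.OCTOBER, 29, 1, 0, 0);
            // Force the Daylight Saving interpretation explicitly.
            theDateTime.set(Calendar.ZONE_OFFSET, -5 * 60 * 60 * 1000); // US/Eastern raw offset
            theDateTime.set(Calendar.DST_OFFSET, 60 * 60 * 1000);       // one hour of DST
            System.out.println("millis= " + theDateTime.getTimeInMillis()); // expected 1162098000000
        }
    }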

    Opened an SR. Looks like there is a problem with conversion either in TopLink or in JDBC.

  • How do I know the source of 0DOC_CATEG in a report since it is mapped from 3 different sources

    Hi,
    How do I know the source of 0DOC_CATEG in a report when it is mapped from 3 different sources?
    I have in a report two characteristics, 0DOC_CATEG and 0DOC_TYPE, which are in dimension Dim1 of the MultiProvider Mult1.
    In the identification for the MultiProvider, it shows that the two fields come from the cube Cube1.
    Now, in the 4 update rules from the underlying 4 ODSs to Cube1, the characteristic mappings show that 0DOC_CATEG and 0DOC_TYPE are mapped from 3 different ODSs (Order Header, Billing Item and Billing Header) to Cube1.
    So if I see 0DOC_CATEG and 0DOC_TYPE in a report, how do I know whether they came from the Order Header ODS, the Billing Item ODS or the Billing Header ODS?
    Thanks

    Hi,
    Since it all comes from the same cube, you can analyze this by adding the request ID from the data package dimension to the report and checking the corresponding request ID in the Manage tab of your cube or in the monitor screen.
    Ramesh

  • Data loading from flat file to cube using bw3.5

    Hi Experts,
    Kindly give me the detailed steps, with screens, for loading data from a flat file to a cube using BW 3.5. Please.

    Hi ,
    Procedure
    You are in the Data Warehousing Workbench in the DataSource tree.
           1.      Select the application components in which you want to create the DataSource and choose Create DataSource.
           2.      On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy.
    The DataSource maintenance screen appears.
           3.      Go to the General tab page.
                                a.      Enter descriptions for the DataSource (short, medium, long).
                                b.      As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
                                c.      Specify whether you want to generate the PSA for the DataSource in the character format. If the PSA is not typed it is not generated in a typed structure but is generated with character-like fields of type CHAR only.
    Use this option if conversion during loading causes problems, for example, because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type.
    In this case, after you have activated the DataSource you can load data into the PSA and correct it there.
           4.      Go to the Extraction tab page.
                                a.      Define the delta process for the DataSource.
                                b.      Specify whether you want the DataSource to support direct access to data.
                                c.      Real-time data acquisition is not supported for data transfer from files.
                                d.      Select the adapter for the data transfer. You can load text files or binary files from your local work station or from the application server.
    Text-type files only contain characters that can be displayed and read as text. CSV and ASCII files are examples of text files. For CSV files you have to specify a character that separates the individual field values. In BI, you have to specify this separator character and an escape character which specifies this character as a component of the value if required. After specifying these characters, you have to use them in the file. ASCII files contain data in a specified length. The defined field length in the file must be the same as the assigned field in BI.
    Binary files contain data in the form of Bytes. A file of this type can contain any type of Byte value, including Bytes that cannot be displayed or read as text. In this case, the field values in the file have to be the same as the internal format of the assigned field in BI.
    Choose Properties if you want to display the general adapter properties.
                                e.      Select the path to the file that you want to load or enter the name of the file directly, for example C:/Daten/US/Kosten97.csv.
    You can also create a routine that determines the name of your file. If you do not create a routine to determine the name of the file, the system reads the file name directly from the File Name field.
                                  f.      Depending on the adapter and the file to be loaded, make further settings.
    ■       For binary files:
    Specify the character record settings for the data that you want to transfer.
    ■       Text-type files:
    Specify how many rows in your file are header rows and can therefore be ignored when the data is transferred.
    Specify the character record settings for the data that you want to transfer.
    For ASCII files:
    If you are loading data from an ASCII file, the data is requested with a fixed data record length.
    For CSV files:
    If you are loading data from an Excel CSV file, specify the data separator and the escape character.
    Specify the separator that your file uses to divide the fields in the Data Separator field.
    If the data separator character is a part of the value, the file indicates this by enclosing the value in particular start and end characters. Enter these start and end characters in the Escape Characters field.
    For example, you chose the ; character as the data separator, but your file contains the value 12;45 for a field. If you set “ as the escape character, the value in the file must be “12;45” so that 12;45 is loaded into BI. The complete value that you want to transfer has to be enclosed by the escape characters. (A sample line is sketched after this procedure.)
    If the escape characters do not enclose the value but are used within it, the system interprets the escape characters as a normal part of the value. If you have specified “ as the escape character, the value 12”45 is transferred as 12”45 and 12”45” is transferred as 12”45”.
    In a text editor (for example, Notepad) check the data separator and the escape character currently being used in the file. These depend on the country version of the file you used.
    Note that if you do not specify an escape character, the space character is interpreted as the escape character. We recommend that you use a different character as the escape character.
    If you select the Hex indicator, you can specify the data separator and the escape character in hexadecimal format. When you enter a character for the data separator and the escape character, these are displayed as hexadecimal code after the entries have been checked. A two character entry for a data separator or an escape sign is always interpreted as a hexadecimal entry.
                                g.      Make the settings for the number format (thousand separator and character used to represent a decimal point), as required.
                                h.      Make the settings for currency conversion, as required.
                                  i.      Make any further settings that are dependent on your selection, as required.
           5.      Go to the Proposal tab page.
    This tab page is only relevant for CSV files. For files in different formats, define the field list on the Fields tab page.
    Here you create a proposal for the field list of the DataSource based on the sample data from your CSV file.
                                a.      Specify the number of data records that you want to load and choose Upload Sample Data.
    The data is displayed in the upper area of the tab page in the format of your file.
    The system displays the proposal for the field list in the lower area of the tab page.
                                b.      In the table of proposed fields, use Copy to Field List to select the fields you want to copy to the field list of the DataSource. All fields are selected by default.
           6.      Go to the Fields tab page.
    Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If you did not transfer the field list from a proposal, you can define the fields of the DataSource here.
                                a.      To define a field, choose Insert Row and specify a field name.
                                b.      Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BI.
                                c.      Instead of generating a proposal for the field list, you can enter InfoObjects to define the fields of the DataSource. Under Template InfoObject, specify InfoObjects for the fields in BI. This allows you to transfer the technical properties of the InfoObjects into the DataSource field.
    Entering InfoObjects here does not equate to assigning them to DataSource fields. Assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
                                d.      Change the data type of the field if required.
                                e.      Specify the key fields of the DataSource.
    These fields are generated as a secondary index in the PSA. This is important in ensuring good performance for data transfer process selections, in particular with semantic grouping.
                                  f.      Specify whether lowercase is supported.
                                g.      Specify whether the source provides the data in the internal or external format.
                                h.      If you choose the external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
                                  i.      If required, specify a conversion routine that converts data from an external format into an internal format.
                                  j.      Select the fields that you want to be able to set selection criteria for when scheduling a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
                                k.      Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
                                  l.      Under Field Type, specify whether the data to be selected is language-dependent or time-dependent, as required.
           7.      Check, save and activate the DataSource.
           8.      Go to the Preview tab page.
    If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview.
    This function allows you to check whether the data formats and data are correct.
    For More Info:  http://help.sap.com/saphelp_nw70/helpdata/EN/43/01ed2fe3811a77e10000000a422035/content.htm
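    To illustrate the data separator and escape character settings from step 4f, here is a hypothetical sample line (the field names and values are invented for this example). With ; as the data separator and " as the escape character, a field value of 12;45 has to be written as "12;45":
    COMP_CODE;GL_ACCOUNT;AMOUNT
    1000;400000;"12;45"
    Only the amount is enclosed in escape characters here, because it is the only value that contains the separator character.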

  • Delta records are not loading from DSO to info cube

    My query is about delta loading from a DSO to an InfoCube (a filter is used in the selection).
    Delta records are not loading from the DSO to the InfoCube. I have tried all the options available in the DTP, but no luck.
    I selected "Change log" and "Get one request only" and ran the DTP, but 0 records were updated in the InfoCube.
    I selected "Change log" and "Get all new data request by request", but again 0 records were updated.
    I selected "Change log" and "Only get the delta once"; in that case all delta records were loaded to the InfoCube as they were in the DSO, but it gave the error message "Lock Table Overflow".
    When I run a full load using the same filter, data loads from the DSO to the InfoCube.
    Can anyone please help me get the delta records from the DSO to the InfoCube?
    Thanks,
    Shamma

    Data is loading in the case of a full load with the same filter, so I don't think the filter is the issue.
    When I follow the sequence below, I get the lock table overflow error:
    1. Full load from the active table, with or without archive.
    2. Then, with the same settings, if I run an init, the final status remains yellow, and when I change the status to green manually, it gives the lock table overflow error.
    When I change the DTP settings for the init run:
    1. I select change log and get only one request, and run the init; it completes successfully with green status.
    2. But when I run the same DTP for delta records, it does not load any data.
    Please help me to resolve this issue.

  • Unable to open Information Map from planning web Help menu

    Unable to open Information Map from planning web Help menu
    The error is :
    Error 404--Not Found
    From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1:
    10.4.5 404 Not Found
    The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent.
    If the server does not wish to make this information available to the client, the status code 403 (Forbidden) can be used instead. The 410 (Gone) status code SHOULD be used if the server knows, through some internally configurable mechanism, that an old resource is permanently unavailable and has no forwarding address.

    Hi,
    It should be accessing :- http://<planning server>:8300/HyperionPlanning/help/en/hp_information_map/hp_information_map.html
    Can you access that address on the machine hosting planning?
    The file structure on the web server, depending on which app server you use, should be something like \Hyperion\deployments\Tomcat5\HyperionPlanning\webapps\HyperionPlanning\help\en\hp_information_map\hp_information_map.html
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • How do I remove BlackBerry Maps from the OS?

    I have a Z10 running 10.3. It's a great update and so far I love the BB Assistant. There is one big sore point for me, though, and that is your decision to deeply integrate BB Maps with 10.3 and Assistant.
    I apologize upfront, but I have to ask: why would you integrate a lagging-edge mapping app with an otherwise leading-edge OS in 10.3?
    Long story short, BB Maps doesn't work for me (no walking directions, no mass transit, no satellite or street view equivalents). The bottom line is that enterprise users need more than what BB Maps offers today and for the foreseeable future. I'm just supremely frustrated about this situation, and I feel bad because 10.3 is great.
    So the question is: how do I remove BB Maps from 10.3 or completely disable it? I just want it gone at this point. Please help; I'm so totally frustrated that you can't easily remove it!
    Thank you, and please let me know as soon as possible how to accomplish this.

    Good news... just to let you know, I did find a way to remove BB Maps.
    Even though it's a core app, by reverting to a previous version of BB Maps I was able to delete it from the OS. The best part is there was no impact to the OS and everything else works as normal. Please, if you're going to make BB Maps a core app, make sure it's a world-class app and that it works for all enterprise users, not just for drivers.
    I can truly say my BB OS 10 experience is much better now without BB Maps. If BB isn't going to invest in Maps, then please don't make it a core app tightly integrated into the OS. I think at this point BB would be better off outsourcing BB Maps to one of the leading map providers such as Nokia, Mapquest, Bing/MS or even, dare I say, Google Maps. I think the BB community would be much happier with any of the first three.
    Thanks

  • How to remove a font file and its mapping from XML Publisher Administrator

    Hello,
    I am trying to remove a font file and its mapping from the XML Publisher Administrator. If this is not possible, is there a way to disable it?
    The reason I am asking is that we just got a new check printer, and I was able to print just fine. However, the MICR font was not coming up properly on the check, so I went ahead and configured the font in the XML Publisher Administrator. Now my check output comes up blank after running the XML Report Publisher program. The only change I made was to upload the MICR font and map it.
    My template and rdf report file are working fine in another environment we have, so I know that's not the problem.
    Any help is appreciated.
    Thanks.

    This is how you do it.
    Font:
    DELETE FROM xdo_lobs
    WHERE lob_code LIKE '<Font_name>';
    Font mapping set:
    DELETE FROM xdo_font_mapping_sets_tl
    WHERE mapping_code LIKE '<MAPPING_CODE>';
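    Note that these deletes are not committed automatically unless autocommit is enabled in your SQL client, so finish with:
    COMMIT;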

  • How to load data from a ODS to CUBE Request ID - by - Request ID?

    How do I load data from an ODS to a cube request by request?
    The problem is that some requests had been deleted from the cube and the delta control between the ODS and the cube was lost. The "data mart status of request" flag of all the requests in the ODS had been reset (blank).
    Now it is necessary to load some requests from the ODS for the cube.
    Notes:
    - it is not possible to make a complete load selecting the data to be loaded;
    - the PSA is not being used;
    - considering the data volume it is impracticable to reload the cube completely.
    Thanks in advance,
    Wesley.

    Dear R B,
    Considering the following:
    -> the delta control was lost;
    -> the data is already active in the ODS;
    -> part of the ODS data is already in the cube.
    The indicated procedure only guarantees loading the data that is in the ODS but not yet in the cube.
    Tks,
    Wesley.

  • Invoking a mapping from another DB in a process flow

    Hi experts, has anyone tried to invoke/execute an OWB mapping from another database in a process flow? Is it possible to do that?
    My process flow (in the staging DB) scenario is to execute several OWB mappings in the staging DB, and then, as the final step, to execute an OWB mapping that is created in the warehouse DB.
    Is that possible?

    Look at this thread:
    Calling WB_RT_API_EXEC.RUN_TASK over database link - http://forums.oracle.com/forums/thread.jspa?threadID=775938
    Regards,
    Oleg

  • Mapping from nested to one table (sales order) XML to IDOC

    Hello,
    I have to map an XML file to the IDoc SALESORDER_CREATEFROMDAT2.SALESORDER_CREATEFROMDAT202.
    How can I map the long text from the XML file to the IDoc structure?
    In the relevant part of the XML file there can be n BPosition elements, each with n longtext lines. The long text must be mapped to a table segment, so in effect it is a mapping from a nested structure to a flat table.
        <BPosition>
             <lpos>1</lpos>
             <bbl_sap_nr/>
             <milvonr/>
             <kurztitel/>
             <anzbest/>
             <anzliefer/>
             <kostenpflichtig/>
             <longtext>
                <line>pos1 zeile1</line>
                <line>pos1 zeile 2</line>
             </longtext>
          </BPosition>
    thanks for your help.

    Hi, I have to map this one XML file to one IDoc.
    XML:
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:MT_Milver xmlns:ns0="http://ccssap.bfi.admin.ch/milver">
       <bestellung>
          <besteller>
             <bestellnr/>
             <auftragdefit/>
             <wempfdebit/>
             <bestelldat/>
             <lieferdat/>
             <anzpos/>
             <language/>
             <adrzeile1/>
             <adrzeile2/>
             <adrzeile3/>
             <adrzeile4/>
          </besteller>
          <Kopf>
             <Lkopf>Kopf1 zeile1</Lkopf>
             <Lkopf>Kopf1 zeile2</Lkopf>
          </Kopf>
          <BPosition>
             <lpos>1</lpos>
             <matnr/>
             <milvonr/>
             <kurztitel/>
             <anzbest/>
             <anzliefer/>
             <kostenpflichtig/>
             <bemerkung>
                <line>pos1 zeile1</line>
                <line>pos1 zeile 2</line>
             </bemerkung>
          </BPosition>
          <BPosition>
             <lpos>2</lpos>
             <matnr/>
             <milvonr/>
             <kurztitel/>
             <anzbest/>
             <anzliefer/>
             <kostenpflichtig/>
             <bemerkung>
                <line>pos2 zeile1</line>
                <line>pos2 zeile 2</line>
             </bemerkung>
          </BPosition>
       </bestellung>
    </ns0:MT_Milver>
    IDOC:
    The IDoc has a segment for the long text (a table). I have to map lpos and line into E1BPSDTEXT of SALESORDER_CREATEFROMDAT2. Is it clearer now?
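    For illustration only, here is a hedged sketch of the target shape such a mapping has to produce: one E1BPSDTEXT segment per <line>, repeating the item number taken from <lpos>. The field names are taken from the standard BAPISDTEXT structure behind this segment; verify them against your own IDoc metadata:
    <E1BPSDTEXT SEGMENT="1">
       <ITM_NUMBER>000001</ITM_NUMBER>
       <TEXT_LINE>pos1 zeile1</TEXT_LINE>
    </E1BPSDTEXT>
    <E1BPSDTEXT SEGMENT="1">
       <ITM_NUMBER>000001</ITM_NUMBER>
       <TEXT_LINE>pos1 zeile 2</TEXT_LINE>
    </E1BPSDTEXT>
    In the graphical mapping this typically means changing the context of line (for example with removeContexts/SplitByValue) so that every line produces its own E1BPSDTEXT segment while ITM_NUMBER is filled from the surrounding lpos.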

  • Problem by transporting the message mapping from PI 7.0 to PI 7.1

    Hi Everyone,
    when transporting the message mapping from PI 7.0 to PI 7.1 I got the following problem:
    "Source code has syntax error:
    K:\usr\sap\E71\DVEBMGS00\j2ee\cluster\serve......
    package udfpool does not exist
    import udfpool.*;
    I have used a UDF in my message mapping.
    This UDF calls a method of a Java class, which is imported as a .jar file with this message mapping.
    This message mapping works very well on PI 7.0,
    but it doesn't work any more on PI 7.1 because of the problem described above.
    anyone can help me out of this problem?
    thanks!
    zc

    Hi,
    But did you import the jar archive into 7.1 first?
    Regards,
    Michal Krawczyk
    http://mypigenie.com XI/PI FAQ

  • Upload information about mapping from say BW/BI system

    Hello Gurus,
    How can I export the mapping information from, say, a BW/BI system? I want to use this exported mapping information to update my mapping sheet document.
    Many thanks

    Hi,
    Double-click on the transformation mapping -> go to Extras -> Tabular Overview -> right-click -> Export to Microsoft Excel.
    You can also save the transformation as a JPG file (click on the Save as JPG button on the left) and attach that to the mapping document.
    Hope this helps!

  • Need help with Resource Mapping from Application Deployment to VC:virtualMachine

    Hi,
    I've built a number of vCO Workflows and hooked them up to Resource Actions in vRA. However, the Workflows I've built all take a VC:VirtualMachine as the input and therefore they only "hook up" to VMs in the Machines list on the Items tab in vRA. Ideally, I want these actions to hook up to the Application Deployments in vRA, since that is what my users are deploying - the VM that the App rides on is somewhat secondary. Plus, it's cumbersome for them to find the VM that corresponds to the App they want to run the action on.
    So, I looked into creating a new Resource Mapping, thinking I could map an Application Deployment to a VC:VirtualMachine so that I could create an Action for the Application. I was able to create a mapping from Application Deployment to VC:VM using the existing Map to VC:VM workflow, and I could add that to the Actions menu for the Application Deployment, but if I try to run it, it fails. I kind of expected this, since I think Map to VC:VM expects an IaaS VC:VM as input to map to a vCO VC:VM. I think what I need to do is write a Workflow that takes in the Application Deployment "object" and finds the VC:VM inside it, but I don't know how to find out the structure of an Application Deployment such that I could parse it and get the VM.
    Does this make sense? Am I going about this the right way? Thanks for any help,
    Tom

