Clarification on 'BAPI_BUS2054_CREATE_MULTI'

Hi Friends,
I need some clarification on the BAPI 'BAPI_BUS2054_CREATE_MULTI':
1. Can BAPI_PROJECT_MAINTAIN be used to create a WBS element in CJ02 with values for custom fields? If yes, how can we do it?
2. I know 'BAPI_BUS2054_CREATE_MULTI' can be used to change WBS elements. My doubt is whether we can create a project and WBS elements with this BAPI. If yes, how can we do it?
3. I want to upload the custom fields in transaction CJ02. How can I do that? Please give me an example.
4. I heard that SAP Note 637345 has to be applied before the BAPI 'BAPI_BUS2054_CREATE_MULTI' can be used. Is this true?
Please clarify my doubts.
Thanks in advance.

Anand,
FU BAPI_BUS2054_CREATE_MULTI
Short Text
Create WBS Elements Using BAPI
Functionality
WBS elements can be created for a project with BAPI "BAPI_BUS2054_CREATE_MULTI". To do this, parameter "I_PROJECT_DEFINITION" must contain the project definition for which the WBS elements are to be created. The individual WBS elements with all required values must be entered in table "IT_WBS_ELEMENT_TABLE".
The WBS elements are created next to each other, in the same sequence as in table "IT_WBS_ELEMENT_TABLE". A WBS element under which the new WBS elements are to be created can be specified in parameter "I_WBS_UP". A WBS element that is to be located directly to the left of the new WBS elements can be specified with parameter "I_WBS_LEFT". If "I_WBS_LEFT" is not specified, the new WBS elements are added on the left-hand side. If I_WBS_UP is also not specified, the new WBS elements are added on the left and on the first level.
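For orientation, a minimal call could look like the sketch below (assuming the standard structures BAPI_BUS2054_NEW for the element table and BAPIRET2 for the return table; the project and element IDs are invented). The two positioning parameters are only indicated as comments; without them the new elements end up on the first level, as described above.

DATA: ls_wbs    TYPE bapi_bus2054_new,
      lt_wbs    TYPE STANDARD TABLE OF bapi_bus2054_new,
      lt_return TYPE STANDARD TABLE OF bapiret2.

* one row per WBS element, in the sequence in which they are to be created
ls_wbs-wbs_element = 'P-0001-01'.
ls_wbs-description = 'New WBS element'.
APPEND ls_wbs TO lt_wbs.

CALL FUNCTION 'BAPI_BUS2054_CREATE_MULTI'
  EXPORTING
    i_project_definition = 'P-0001'      "project the elements belong to
*   i_wbs_up             = ...           "optional: create below this element
*   i_wbs_left           = ...           "optional: element directly to the left
  TABLES
    it_wbs_element       = lt_wbs
    et_return            = lt_return.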
Before anything is created, the following is checked:
Is another project already being processed in the LUW (Logical Unit of Work)?
Can the project be locked?
If one of the checks was not successful, nothing is created. Otherwise, each WBS element in "IT_WBS_ELEMENT_TABLE" is processed individually, although the following is checked first:
Is the data consistent?
If all checks were successful, the individual WBS element is created in the document tables. Afterwards, the hierarchy is updated, that is, the new elements are added in the appropriate place as described above. If an error occurs while this is being carried out, the new elements are created on the right-hand side, on the first level, and an error message is generated in the return table. An error can occur if the WBS element in I_WBS_UP or I_WBS_LEFT does not exist in the specified project, if I_WBS_UP is not directly above I_WBS_LEFT when both are specified, or because an inconsistency occurs in the hierarchy for another reason.
As soon as a LUW (Logical Unit of Work) is completed with BAPI BAPI_PS_PRECOMMIT and COMMIT WORK, the WBS elements are finally changed.
Only one project or WBS element from a project can be processed at a time in a LUW.
The return parameter RETURN displays first an error or success message that shows whether the WBS elements could be created. The first message variable contains the object type, the second contains the object ID, and the fourth contains the GUID (if it could be read). All related messages that were generated during processing are listed underneath the error or success messages. The parameters of the individual messages are filled with the object ID.
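Reusing lt_return from the sketch above, the first entry can be evaluated roughly like this (BAPIRET2 is the standard return structure, so the message variables are MESSAGE_V1 to MESSAGE_V4):

DATA ls_return TYPE bapiret2.

READ TABLE lt_return INTO ls_return INDEX 1.
IF sy-subrc = 0.
  "ls_return-type       : 'S' on success, 'E'/'A'/'X' on error
  "ls_return-message_v1 : object type
  "ls_return-message_v2 : object ID
  "ls_return-message_v4 : GUID, if it could be read
ENDIF.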
Notes
1. Definition "Processing Unit"
In the following, the term "processing unit" refers to a series of related processing steps.
The first step in a processing unit is initialization, which is done by calling the BAPI BAPI_PS_INITIALIZATION.
Afterwards, the individual BAPIs listed below can be used several times, if required.
The processing unit ends when the final precommit (call BAPI BAPI_PS_PRECOMMIT) is executed with a subsequent COMMIT WORK (for example, the statement COMMIT WORK, the BAPI "BAPI_TRANSACTION_COMMIT" or the BapiService.TransactionCommit method).
After the final COMMIT WORK, the next initialization opens a new processing unit via the BAPI "BAPI_PS_INITIALIZATION".
In principle, the following applies to each individual processing unit.
2. Creation of a Processing Unit
Each processing unit must be initialized by calling the BAPI "BAPI_PS_INITIALIZATION" once.
Afterwards, the following individual BAPIs can be used within a processing unit - they can also be used more than once, taking into account the "One-Project-Principle" explained below. This also means that an object created in the current processing unit by a CREATE-BAPI can be changed by a CHANGE-BAPI or STATUS-BAPI.
Apart from the BAPIs explicitly named below, you may only call BAPIs that execute GET or READ methods. In particular, the BAPIs for confirming a network may not be used together with the individual BAPIs named below!
Business Object ProjectDefinitionPI
BAPI Method
BAPI_BUS2001_CREATE ProjectDefinitionPI.CreateSingle
BAPI_BUS2001_CHANGE ProjectDefinitionPI.Change
BAPI_BUS2001_DELETE ProjectDefinitionPI.Delete
BAPI_BUS2001_SET_STATUS ProjectDefinitionPI.SetStatus
BAPI_BUS2001_PARTNER_CREATE_M ProjectDefinitionPI.PartnerCreateMultiple
BAPI_BUS2001_PARTNER_CHANGE_M ProjectDefinitionPI.PartnerChangeMultiple
BAPI_BUS2001_PARTNER_REMOVE_M ProjectDefinitionPI.PartnerRemoveMultiple
Business Object WBSPI
BAPI Method
BAPI_BUS2054_CREATE_MULTI WBSPI.CreateMultiple
BAPI_BUS2054_CHANGE_MULTI WBSPI.ChangeMultiple
BAPI_BUS2054_DELETE_MULTI WBSPI.DeleteMultiple
BAPI_BUS2054_SET_STATUS WBSPI.SetStatus
Business Object NetworkPI
BAPI Method
BAPI_BUS2002_CREATE NetworkPI.CreateFromData
BAPI_BUS2002_CHANGE NetworkPI.Change
BAPI_BUS2002_DELETE NetworkPI.Delete
BAPI_BUS2002_ACT_CREATE_MULTI NetworkPI.ActCreateMultiple
BAPI_BUS2002_ACT_CHANGE_MULTI NetworkPI.ActChangeMultiple
BAPI_BUS2002_ACT_DELETE_MULTI NetworkPI.ActDeleteMultiple
BAPI_BUS2002_ACTELEM_CREATE_M NetworkPI.ActElemCreateMultiple
BAPI_BUS2002_ACTELEM_CHANGE_M NetworkPI.ActElemChangeMultiple
BAPI_BUS2002_ACTELEM_DELETE_M NetworkPI.ActElemDeleteMultiple
BAPI_BUS2002_SET_STATUS NetworkPI.SetStatus
The processing unit must be finished by calling the BAPIs BAPI_PS_PRECOMMIT and BAPI_TRANSACTION_COMMIT (in that order).
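Put together, a minimal processing unit for creating WBS elements therefore looks roughly like the following sketch (reusing lt_wbs and lt_return from the example further up; error handling is reduced to a simple check of the return table):

DATA lt_return_pre TYPE STANDARD TABLE OF bapiret2.

* 1. open the processing unit
CALL FUNCTION 'BAPI_PS_INITIALIZATION'.

* 2. one or more individual BAPIs, all for the same project
CALL FUNCTION 'BAPI_BUS2054_CREATE_MULTI'
  EXPORTING
    i_project_definition = 'P-0001'
  TABLES
    it_wbs_element       = lt_wbs
    et_return            = lt_return.

* 3. close the processing unit: PRECOMMIT, then COMMIT WORK (in that order)
* (a full check would also cover message types 'A' and 'X')
READ TABLE lt_return WITH KEY type = 'E' TRANSPORTING NO FIELDS.
IF sy-subrc <> 0.
  CALL FUNCTION 'BAPI_PS_PRECOMMIT'
    TABLES
      et_return = lt_return_pre.
  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
    EXPORTING
      wait = 'X'.
ENDIF.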
3. One-Project Principle
For technical reasons, only the project definition and the WBS elements of one project can be processed in a processing unit.
More than one project is used, for example, if
You create or change more than one project
You have changed a project and want to change a network to which WBS elements from a different project are assigned
You want to change various networks to which WBS elements from different projects are assigned
You create or change a WBS assignment in a network so that a WBS element from a second project is used
WBS elements from different projects are already assigned to a network (note: this type of network cannot be processed with the network BAPIs named above).
If you define a report for calling BAPIs, this means that:
The report may use a maximum of one project per processing unit. If several projects are involved, the individual BAPI calls must be distributed across more than one processing unit, each of which uses a maximum of one project.
4. All-Or-Nothing Principle
If an error occurs in a processing unit in an individual BAPI or in the BAPI "BAPI_PS_PRECOMMIT" (that is, the return table ET_RETURN contains at least one message of type "E" (error), "A" (abnormal end) or "X" (exit)), posting is not possible.
If an error occurs in an individual BAPI and despite this you call the BAPI "BAPI_PS_PRECOMMIT", message CNIF_PI 056 is issued with message type I (information).
If an error occurs in an individual BAPI or in the BAPI "BAPI_PS_PRECOMMIT", but despite this you execute a COMMIT WORK, the program that is currently in process is terminated and message CNIF_PI 056 is issued with message type X.
This is to ensure data consistency for all objects created, changed, and/or deleted in the processing unit.
Note that the processing unit to which this happens can no longer be successfully closed and therefore, no new processing unit can be started.
However, you can set the current processing unit back to an initialized status by using a rollback work (for example, statement ROLLBACK WORK, the BAPI "BAPI_TRANSACTION_ROLLBACK" or the method BapiService.TransactionRollback). Technically speaking, this means that the previous LUW is terminated and a new LUW is started in the current processing unit.
Note that in this case, the current processing unit does not have to be re-initialized.
Also note that the rollback itself follows the "all-or-nothing" principle: all individual BAPIs carried out up to the rollback are discarded. After a rollback you can therefore no longer refer to an object that was created earlier in the current processing unit by a CREATE-BAPI.
However, you can close the processing unit again after a rollback, using a PRECOMMIT and COMMIT WORK, as long as all individual BAPIs, and the precommit carried out after the rollback, finish without errors.
You can carry out several rollbacks in a processing unit (technically: start a new LUW several times).
5. Procedure in the Case of Errors
As soon as an error occurs in an individual BAPI or in the BAPI "BAPI_PS_PRECOMMIT", you have the following options:
Exit the report or the program that calls the BAPIs, the PRECOMMIT and the COMMIT WORK.
Execute a rollback in the current processing unit.
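In code, the second option might look like this (a sketch only; lt_return stands for the return table of the individual BAPI that failed):

DATA lv_error TYPE c LENGTH 1.

LOOP AT lt_return TRANSPORTING NO FIELDS WHERE type CA 'EAX'.
  lv_error = 'X'.
  EXIT.
ENDLOOP.

IF lv_error = 'X'.
  "discard everything done in the current processing unit so far
  CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
  "the processing unit is back in its initialized state; individual BAPIs
  "may be called again, but objects created before the rollback no longer exist
ENDIF.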
6. Rules for Posting
After you have successfully called the individual BAPIs of a processing unit, you must call the PRECOMMIT "BAPI_PS_PRECOMMIT".
If the PRECOMMIT is also successful, the COMMIT WORK must take place directly afterwards.
In particular, note that after the PRECOMMIT, you cannot call other individual BAPIs again in the current processing unit.
It is also not permitted to call the PRECOMMIT more than once in a processing unit.
7. Recommendation "COMMIT WORK AND WAIT"
If an object created in a processing unit is to be used in a subsequent processing unit (for example, as an account assignment object in a G/L account posting) it is recommended to call the commit work with the supplement "AND WAIT" or to set the parameters for the BAPI "BAPI_TRANSACTION_COMMIT" accordingly.
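In ABAP this simply means one of the following (standard interface of BAPI_TRANSACTION_COMMIT):

CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
  EXPORTING
    wait = 'X'.        "same effect as COMMIT WORK AND WAIT

* or, without the BAPI:
* COMMIT WORK AND WAIT.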
8. Field Selection
The field selection is a tool for influencing the user interface (that is, for the dialog). In the BAPIs, the settings from the field selection (for example, fields that are not ready for input or required-entry) are not taken into account.
9. Using a date in the BAPI interface
The BAPI must be provided with the date in the internal format YYYYMMDD (year month day). No special characters may be used.
As a BAPI must work independently of the user, the date cannot and should not be converted to the date format specified in the user-specific settings.
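For example (reusing ls_wbs from the sketch further up; WBS_BASIC_START_DATE is one of the standard date fields of the WBS element table):

DATA lv_date TYPE sy-datum.

lv_date = '20101231'.                       "YYYYMMDD, no separators
ls_wbs-wbs_basic_start_date = lv_date.      "never pass e.g. '31.12.2010'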
10. Customer Enhancements of the BAPIs
For the BAPIs used to create and change project definitions, WBS elements, networks, activities, and activity elements, you can automatically fill the fields of the tables PROJ, PRPS, AUFK, and AFVU that have been defined for customer enhancements in the standard system.
For this purpose, help structures that contain the respective key fields, as well as the CI include of the table are supplied. The BAPIs contain the parameter ExtensionIN in which the enhancement fields can be entered and also provide BAdIs in which the entered values can be checked and, if required, processed further.
CI Include   Help Structure                Key Fields
CI_PROJ      BAPI_TE_PROJECT_DEFINITION    PROJECT_DEFINITION
CI_PRPS      BAPI_TE_WBS_ELEMENT           WBS_ELEMENT
CI_AUFK      BAPI_TE_NETWORK               NETWORK
CI_AFVU      BAPI_TE_NETWORK_ACTIVITY      NETWORK, ACTIVITY
CI_AFVU      BAPI_TE_NETWORK_ACT_ELEMENT   NETWORK, ACTIVITY, ELEMENT
Procedure for Filling Standard Enhancements
Before you call the BAPI for each object that is to be created or changed, for which you want to enter customer-specific table enhancement fields, add a data record to the container ExtensionIn:
STRUCTURE:    Name of the corresponding help structure
VALUEPART1:   Key of the object + start of the data part
VALUEPART2-4: If required, the continuation of the data part
VALUEPART1 to VALUEPART4 are therefore filled consecutively, first with the keys that identify the table rows and then with the values of the customer-specific fields. By structuring the container in this way, it is possible to transfer its content with one MOVE command to the structure of the BAPI table extension.
Note that when objects are changed, all fields of the enhancements are overwritten (as opposed to the standard fields, where only those fields for which the respective update indicator is set are changed). Therefore, even if you only want to change one field, all the fields that you transfer in ExtensionIn must be filled.
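As a sketch (BAPIPAREX is the standard container type for ExtensionIn; the key field WBS_ELEMENT comes from the table above, while ZZ_PRIORITY is only an invented example of a customer field in CI_PRPS):

DATA: ls_te_wbs TYPE bapi_te_wbs_element,
      ls_extin  TYPE bapiparex,
      lt_extin  TYPE STANDARD TABLE OF bapiparex.

ls_te_wbs-wbs_element = 'P-0001-01'.        "key that identifies the table row
ls_te_wbs-zz_priority = 'A'.                "customer field (invented name)

ls_extin-structure  = 'BAPI_TE_WBS_ELEMENT'.
ls_extin-valuepart1 = ls_te_wbs.            "key + start of the data part
* a single move like this works if the CI include contains only
* character-like fields; otherwise fill VALUEPART1-4 with offsets,
* as in the coding examples further down this page
APPEND ls_extin TO lt_extin.

* lt_extin is then passed in parameter ExtensionIn of the CREATE/CHANGE BAPI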
Checks and Further Processing
Using the methods ...CREATE_EXIT1 or ...CHANGE_EXIT1 of the BAdIs BAPIEXT_BUS2001, BAPIEXT_BUS2002, and BAPIEXT_BUS2054, you can check the entered values (and/or carry out other checks).
In the BAdI's second method, you can program that the data transferred to the BAPI is processed further (if you only want to transfer the fields of the CI includes, no more action is required here).
For more information, refer to the SAP Library under Cross-Application Components -> Business Framework Architecture -> Enhancements, Modifications ... -> Customer Enhancement and Modification of BAPIs -> Customer Enhancement of BAPIs (CA-BFA).
Further information
For more information, refer to the SAP Library under Project System -> Structures -> Project System Interfaces -> PS-EPS Interface to External Project Management Systems.
Parameters
I_PROJECT_DEFINITION
IT_WBS_ELEMENT
ET_RETURN
EXTENSIONIN
EXTENSIONOUT
Exceptions
Function Group
CJ2054

Similar Messages

  • Domain Guideance and Clarification using SVN and an Export suggestion

    Hello Oracle SQL Data Modeler Support,
    Apologies if this has been documented somewhere and I have missed reading it, but I have gone through the User Guide and cannot find the clarification I want regarding domains.
    1) WHAT IS BEST PRACTICE TO SAVE WHEN USING SVN
    From the forum I have picked up that the domains file is in the following directory:
    ~\datamodeler\datamodeler\types
    File name is 'defaultdomains.xml'
    When I come to save the file using SVN I get 'Choose versioned folder for storing system types'
    I assume this is where the domains file is stored.
    I require the Domains to be available centrally to all Designs I create, what should I do?
    a) Set the folder to ~\datamodeler\datamodeler\types
    b) Create a design called 'Domains' and store it in this folder
    c) Any thing you may suggest
    2) EXPORT OF DOMAIN FILE SUGGESTION
    This should be a quick win for you: can you please add an Export Domains function? It seems this needs to do no more than make a copy of the defaultdomains.xml file and create it in a specified export directory.
    It will avoid having to go through the forum to pick up that the defaultdomains.xml file needs to be copied and transferred over for new SQL Data Modeler installations.

    Hello,
    > I require the Domains to be available centrally to all Designs I create, what should I do?
    Default location is fine if SVN is not used and if all designs are used only on that computer.
    If versioning is used then it's better to have a separate directory for domains, and this directory shouldn't be part of any design's directory - i.e. for designs you can have directories c:\des_1, c:\des_2 ... c:\des_n - one directory per design, and that directory will contain the design DMD file and design folder. For domains you can have a directory c:\DM_Sys_types and you need to set this directory in "Tools>Preferences>Data Modeler>system types directory" - logical types, RDBMS sites and scripts will also be stored there.
    Philip

  • BI Java Installation: Clarification Needed!

    I'm wondering if someone could help clear this up for me:
    We have installed NW2004s SPS08 on Solaris with only the usage type EP (AS Java/EP). A while back I was tasked with connecting our BW ABAP system to our Portal ("Integration into the Portal" - transaction SPRO etc).
    After starting this task I noticed I was missing things that the instructions were telling me to configure, i.e. items related to BI that weren't in the Portal, such as certain roles, the BI Repository Manager, etc.
    Reading around, it seemed like I need to install the BI usage type.
    I have now been tasked with another installation of the Portal (NW2004s SR1 this time), but am trying to head off the problems I'm experiencing trying to connect the Portal to our BW ABAP system. This will be a Java only installation. I've read that BI-Java requires EP and AS Java, and that if I install the BI-Java usage type, EP and AS-Java will be installed automatically.
    My question is, if I do the BI-Java installation and it automatically installs EP/AS-Java, will the Portal still act the same way as it does in my EP/AS-Java only installation I already have? We have many plans to use the Portal as an entry point for all of our backend systems, so if the Portal's capabilities are not what we see already (in our first installation of just EP/AS-Java) then we will have problems.
    Any clarification is greatly appreciated and I will award points accordingly.
    THANKS!
    Beau.

    I think that's what I needed to know.
    I was actually wondering about DI as well. Our developers are having problems deploying .EAR files to our original EP-only install. They can deploy .PAR files with no problems, but .EAR files always error out. Maybe having DI will solve this problem as well? I'm a little concerned about the hardware capacity of this box with BI, EP and DI all installed on it. I had contacted SAP about installing DI a while back and basically they told me to install it on a separate server, by itself. We're running a Sun Enterprise 420R with 4 GB of memory and a 450 MHz processor for this new installation. Do you think this box is capable of handling EP, BI and DI (AS Java)?
    Thanks for your help!

  • Clarifications in Asset Accounting

    Dear Experts,
    Please clarify below questions.
    1) What is the difference between Depreciation Area and Depreciation Key?
    2) What is the importance of Recalculate value button in Asset Accounting?
    3) Suppose I have 1000 assets, if I want to run depreciation only for 200 assets how can I do that?
    4) Suppose I have 5 depreciation areas, but I am able to see the book depreciation values only. Where can I see the other depreciation values? If we can't see them, for what purpose are we using the other depreciation areas?
    5) Do vendor and customer balances get updated regularly, or once a month or year?
    6) Where do we have to create number ranges - in the Production or the Development environment?
    7) How can we transfer GLs from one environment to another?
    Full points will be assigned as way of thanks
    Regards,
    Vineela

    Hi Krishna,
    Thanks for your reply, but I still need some more clarification, please respond:
    2) What is the importance of Recalculate value button in Asset Accounting?-
    (A) Recalculates depreciation when asset parameters are changed.
    Where will I find it, as the recalculate button is not there in AFAB?
    3) Suppose I have 1000 assets, if I want to run depreciation only for 200 assets how can I do that?
    (A) Select those 200 and run depreciation.
    Here the asset selection option is available only in the test run, not in the update run.
    4) If suppose I have 5 depreciation areas, I am able to see the book depreciation values only; where can I see the other depreciation values? If we can't see them, for which purpose are we using the other depreciation areas? (A) Use AW01N - you can see all depreciation areas there.
    In AW01N only the book depreciation values are displayed; how can I see the other depreciation area values?
    Regards
    Vineela
    Edited by: Vineela Siri on Apr 9, 2008 7:22 AM

  • I need a clarification : Can I use EJBs instead of helper classes for better performance and less network traffic?

    My application was designed based on MVC architecture, but I made some changes to it based on my requirements. The servlet invokes helper classes, the helper classes use EJBs to communicate with the database, and the JSPs also use EJBs to get the results back.
    I have two EJBs (stateless), one servlet, nearly 70 helper classes, and nearly 800 JSPs. The servlet acts as controller and all database transactions are done through EJBs only. The helper classes contain the business logic. Based on the request, the relevant helper class is invoked by the servlet, and all database transactions are done through EJBs. Session scope is 'Page' only.
    Now I am planning to use EJBs (for the business logic) instead of the helper classes. But before doing that I need some clarification regarding network traffic and better usage of container resources.
    Please suggest which method (helper classes or EJBs) is preferable
    1) to get better performance and.
    2) for less network traffic
    3) for better container resource utilization
    I thought if I use EJBs, then the network traffic will increase, because every time it makes a remote call to the EJBs.
    Please give detailed explanation.
    thank you,
    sudheer

    > Please suggest which method (helper classes or EJBs) is preferable:
    > 1) to get better performance
    EJBs have quite a lot of overhead associated with them to support transactions and remoteability. A non-EJB helper class will almost always outperform an EJB, often considerably. If you plan on making your 70 helper classes EJBs you should expect to see a dramatic decrease in maximum throughput.
    > 2) for less network traffic
    There should be no difference. Both architectures will probably make the exact same JDBC calls from the RDBMS's perspective. And since the EJB's and JSP's are co-located there won't be any other additional overhead there either. (You are co-locating your JSP's and EJB's, aren't you?)
    > 3) for better container resource utilization
    Again, the EJB version will consume a lot more container resources.

  • Need some clarification on Replacement Path with Variable

    Hello Experts,
    Need some clarification on Replacement Path with Variable.
    We have 2 options with replacement path for characteristic variables i.e.
    1) Replace with query
    2) Replace with variable.
    Now, when we use  "Replace with variable" we give the variable name. Then we get a list for "Replace with" as follows:
    1) Key
    2) External Characteristic Value Key
    3) Label
    4) Attribute value.
    I need detailed explanation for the above mentioned 4 options with scenarios.
    Thanks in advance.
    Regards
    Lavanya

    Hi Lavanya,
    Please go through the below link.
    http://help.sap.com/saphelp_nw70/helpdata/EN/a4/1be541f321c717e10000000a155106/frameset.htm
    Hope this gives you a complete and detailed explanation.
    Regards,
    Reddy

  • Error while Creating WBS element using BAPI 'BAPI_BUS2054_CREATE_MULTI'

    Hi Expert,
    I've a requirement to create WBS elements using BAPI. And I am using BAPIs in the following manner.
    CALL FUNCTION 'BAPI_PS_INITIALIZATION'.

    CALL FUNCTION 'BAPI_BUS2054_CREATE_MULTI'
      EXPORTING
        i_project_definition = g_pdwbs
      TABLES
        it_wbs_element       = it_wbs_element
*       extensionin          =
*       extensionout         =
        et_return            = it_return.

    CALL FUNCTION 'BAPI_PS_PRECOMMIT'.

    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'.
    When I do so I am getting the below errors. Please suggest.
    "Individual check for creating the object WBS Element C-497082 required ".
    "Individual check for creating the object WBS Element C-497082-0001 required".
    Please suggest how to correct this error. 
    <REMOVED BY MODERATOR - REQUEST OR OFFER POINTS ARE FORBIDDEN>
    Edited by: Alvaro Tejada Galindo on Aug 18, 2008 5:25 PM

    I am getting the same error, but I cannot figure it out how to detect it and fix it.  Will you please share your solution?
    Thanks,

  • Rebate clarification

    Hi all
    I need some clarification on rebates. Kindly help me. I just want to know whether the rebate will be considered for free goods also, i.e. whether the volume of the goods supplied free of cost is also considered for the rebate calculation. As per my understanding it should not be considered, but I still need clarification on this. Kindly help me.

    Hi,
    Please follow the below links, they should help you:
    http://help.sap.com/saphelp_46c/helpdata/en/5d/363eb7583f11d2a5b70060087d1f3b/content.htm
    http://www.erpgenie.com/publications/saptips/052005.pdf
    REgards,
    Krishna O

  • Issue creating WBS using BAPI_BUS2054_CREATE_MULTI

    Hi All,
    I am trying to create WBS elements under an existing project (CJ20N) using "BAPI_BUS2054_CREATE_MULTI". I am getting "ET_RETURN" messages of types 'S', 'W' and 'I', and even after the commit no WBS is created. The type 'S' message says "Individual check for creating the object WBS Element XXX required". Please find my code below for more info. Am I using the correct BAPI? Any hint or suggestion is welcome. Thanks in advance.
    Regards,
    Trim
    LOOP AT gt_wbs_element .
        MOVE-CORRESPONDING gt_wbs_element to gt_wbs_element_bapi.
        MOVE : gt_wbs_element-mims_id    to gt_wbs_element_bapi-USER_FIELD_CHAR20_1.
        MOVE : gt_wbs_element-PROJECT_DEFINITION to gv_proj.
    *** Convert all dates
        split_date gt_wbs_element-WBS_BASIC_START_DATE gt_wbs_element_bapi-WBS_BASIC_START_DATE.
        split_date gt_wbs_element-WBS_BASIC_FINISH_DATE gt_wbs_element_bapi-WBS_BASIC_FINISH_DATE.
        split_date gt_wbs_element-WBS_FORECAST_START_DATE gt_wbs_element_bapi-WBS_FORECAST_START_DATE.
        split_date gt_wbs_element-WBS_FORECAST_FINISH_DATE gt_wbs_element_bapi-WBS_FORECAST_FINISH_DATE.
    *   split_date gt_wbs_element-WBS_ACTUAL_START_DATE gt_wbs_element_bapi-WBS_ACTUAL_START_DATE.
    *   split_date gt_wbs_element-WBS_ACTUAL_FINISH_DATE gt_wbs_element_bapi-WBS_ACTUAL_FINISH_DATE.
        APPEND gt_wbs_element_bapi.
    ****   Update Custom 'Z' Fields
        clear : BAPI_TE_WBS_ELEMENT, gv_error.
        BAPI_TE_WBS_ELEMENT-WBS_ELEMENT    = gt_wbs_element-WBS_ELEMENT.
        BAPI_TE_WBS_ELEMENT-ZZCP_APPRBUD   = gt_wbs_element-ZZCP_APPRBUD.
        BAPI_TE_WBS_ELEMENT-ZZCP_ELECT     = gt_wbs_element-ZZCP_ELECT.
        BAPI_TE_WBS_ELEMENT-ZZCP_AREA      = gt_wbs_element-ZZCP_AREA.
        BAPI_TE_WBS_ELEMENT-ZZCP_PROG      = gt_wbs_element-ZZCP_PROG.
        BAPI_TE_WBS_ELEMENT-ZZCP_SUBPR     = gt_wbs_element-ZZCP_SUBPR.
    **    BAPI_TE_WBS_ELEMENT-ZZCP_FINALDAT  = gt_wbs_element-ZZCP_FINALDAT.
        BAPI_TE_WBS_ELEMENT-ZZCP_TOTBUD    = gt_wbs_element-ZZCP_TOTBUD.
    **    BAPI_TE_WBS_ELEMENT-ZZCP_DADHC_REG = gt_wbs_element-ZZCP_DADHC_REG.
    **    BAPI_TE_WBS_ELEMENT-ZZCP_DADHC_CEP = gt_wbs_element-ZZCP_DADHC_CEP.
        BAPI_TE_WBS_ELEMENT-ZZCP_PREDBUD   = gt_wbs_element-ZZCP_PREDBUD.
        BAPI_TE_WBS_ELEMENT-ZZCP_CLIENT    = gt_wbs_element-ZZCP_CLIENT.
        BAPI_TE_WBS_ELEMENT-ZZCP_PM_NAME   = gt_wbs_element-ZZCP_PM_NAME.
        GT_EXTENSION_IN-STRUCTURE = 'BAPI_TE_WBS_ELEMENT'.
        GT_EXTENSION_IN-VALUEPART1 = BAPI_TE_WBS_ELEMENT+0(199).
        GT_EXTENSION_IN-VALUEPART2 = BAPI_TE_WBS_ELEMENT+199(171).
        APPEND GT_EXTENSION_IN.
        MOVE-CORRESPONDING gt_wbs_element to wa_wbs_elem.
        AT END OF PROJECT_DEFINITION.
    *** Initialise BAPI
          CALL FUNCTION 'BAPI_PS_INITIALIZATION'.
          clear gt_return[].
          CALL FUNCTION 'BAPI_BUS2054_CREATE_MULTI'
            EXPORTING
              I_PROJECT_DEFINITION       = gv_proj
            TABLES
              IT_WBS_ELEMENT             = gt_wbs_element_bapi
              ET_RETURN                  = gt_return
              EXTENSIONIN                = gt_extension_in
    *         EXTENSIONOUT               =
            EXCEPTIONS
              error_message              = 1
              others                     = 2.
    *** Check GT_RESULT for success
          LOOP AT gt_return where type co 'EA'.
            gv_error = 'X'.
          ENDLOOP.
          IF gv_error is initial.
            CALL FUNCTION 'BAPI_PS_PRECOMMIT'
              TABLES
                ET_RETURN = gt_return_pre.
            LOOP AT gt_return_pre where type co 'EA'.
              gv_error = 'X'.
            ENDLOOP.
          ENDIF.
          IF gv_error is INITIAL.
            CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
              EXPORTING
                WAIT   = '1'
              IMPORTING
                RETURN = GT_RETURN_COMMIT.
            clear gv_write.
            format color 4.
            CONCATENATE 'Success: WBS Attached to Project' gv_proj '-'
                   into  gv_write.
            WRITE / gv_write. CLEAR gv_write.
            move-corresponding wa_wbs_elem to gt_wbs_suc.
            gt_wbs_suc-message      = gv_write.
            gt_wbs_suc-message_type = 'S'.
            append gt_wbs_suc.
            format reset.
          else.
            clear gv_write.
            format color 6.
            CONCATENATE 'Error: Attaching WBS to Project' space '-' space gv_proj
                    into  gv_write.
            WRITE / gv_write. CLEAR gv_write.
            move-corresponding wa_wbs_elem to gt_wbs_suc.
            gt_wbs_suc-message      = gv_write.
            gt_wbs_suc-message_type = 'E'.
            append gt_wbs_suc.
            format reset.
          ENDIF.
          CLEAR: gv_proj, gt_wbs_element_bapi, gt_extension_in, gt_return_pre, GT_RETURN_COMMIT.
          refresh: gt_wbs_element_bapi, gt_extension_in, gt_return_pre, GT_RETURN_COMMIT.
        endat.
      ENDLOOP.

    Hi Trim
    The message comes from the BAPI_BUS2054_CREATE_MULTI itself (it is S004) - see the excerpt below. I would place a breakpoint at the point where the BAPI calls the function PS_BAPI_PREPARE to see if the lv_subrc variable is being set at this stage.
    Cheers
    Gareth
          call function 'CJ2054_CREATE'
               exporting
                    i_pspid             = i_project_definition
                    i_wbs_element       = ls_wbs_element
               tables
                    extensionin         = extensionin
               exceptions
                    element_not_created = 1
                    dates_not_created   = 2.
        endif.
        if sy-subrc <> 0 or lv_subrc <> 0.
          message e007(cnif_pi) with text-wbs ls_wbs_element-wbs_element
                                into null.
          lv_error = con_yes.
        else.
          message s004(cnif_pi) with text-wbs ls_wbs_element-wbs_element
                                into null.

  • Issue in Updating Customer specific fields in WBS using BAPI_BUS2054_CREATE_MULTI

    Hi Experts,
    I am able to create the WBS element using BAPI_BUS2054_CREATE_MULTI. But the issue is that I am not able to update the customer-specific fields, even after passing the fields as per the specification in the function module documentation. I have also created an implementation of the BAdI as per the specification below from the FM documentation:
    Procedure for Filling Standard Enhancements
    Before you call the BAPI for each object that is to be created or changed,
    for which you want to enter customer-specific table enhancement fields, add a
    data record to the container ExtensionIn:
    STRUCTURE:    Name of the corresponding help structure
    VALUEPART1:   Key of the object + start of the data part
    VALUEPART2-4: If required, the continuation of the data part
    VALUEPART1 to VALUEPART4 are therefore filled consecutively, first with the
    keys that identify the table rows and then with the values of the
    customer-specific fields. By structuring the container in this way, it is
    possible to transfer its content with one MOVE command to the structure of the
    BAPI table extension.
    Note that when objects are changed, all fields of the enhancements are
    overwritten (as opposed to the standard fields, where only those fields for
    which the respective update indicator is set are changed). Therefore, even if
    you only want to change one field, all the fields that you transfer in
    ExtensionIn must be filled.
    Checks and Further Processing
    Using the methods ...CREATE_EXIT1 or. ...CHANGE_EXIT1 of the BAdI
    BAPIEXT_BUS2001, BAPIEXT_BUS2002, and BAPIEXT_BUS2054, you can check the entered
    values (and/or carry out other checks).
    In the BAdI's second method, you can program that the data transferred to the
    BAPI is processed further (if you only want to transfer the fields of the CI
    includes, no more action is required here).
    But still I am unable to update the custom fields, though I am able to create the WBS with all the other fields.
    I am using the attached code to achieve this. Do we need to code anything inside the CREATE_EXIT1 or CREATE_EXIT2 method implementation of the BAdI, or ...
    Please help on priority

    Hi Rahul,
    The first observation from your code is that I could not find an assignment for the field
    GWA_WBS_EXTIN-STRUCTURE. It looks like you are not filling this field, and that could be one reason for the failure. Try filling it and let us know if you still have the problem.
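    In other words, before appending the row, something along these lines (GWA_TE_WBS_ELEMENT and GIT_WBS_EXTIN are only assumed names for the work area holding the CI_PRPS data and the ExtensionIn table in your attached code):
    GWA_WBS_EXTIN-STRUCTURE  = 'BAPI_TE_WBS_ELEMENT'.   "without this the container row is ignored
    GWA_WBS_EXTIN-VALUEPART1 = GWA_TE_WBS_ELEMENT.      "key (WBS_ELEMENT) + customer fields
    APPEND GWA_WBS_EXTIN TO GIT_WBS_EXTIN.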
    Br..
    Dwaraka

  • Clarification on Data Guard(Physical Standyb db)

    Hi guys,
    I have been trying to set up Data Guard with a physical standby database for the past few weeks and I think I have managed to set it up and also perform a switchover. I have been reading a lot of websites and even the Oracle docs for this.
    However, I need clarification on the setup and whether or not it is working as expected.
    My environment is Windows 32bit (Windows 2003)
    Oracle 10.2.0.2 (Client/Server)
    2 Physical machines
    Here is what I have done.
    Machine 1
    1. Create a primary database using standard DBCA, hence the Oracle service(oradgp) and password file are also created along with the listener service.
    2. Modify the pfile to include the following:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgp'
    *.fal_server='oradgs'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgp'
    *.log_archive_dest_2='SERVICE=oradgs LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgs'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgp
    The locations on the harddisk are all available and archived redo are created (e:\archlogs)
    3. I then add the necessary (4) standby logs on primary.
    4. To replicate the db on the machine 2(standby db), I did an RMAN backup as:-
    RMAN> run
    {allocate channel d1 type disk format='M:\DGBackup\stby_%U.bak';
    backup database plus archivelog delete input;}
    5. I then copied over the standby~.bak files created from machine1 to machine2 to the same directory (M:\DBBackup) since I maintained the directory structure exactly the same between the 2 machines.
    6. Then created a standby controlfile. (At this time the db was in open/write mode).
    7. I then copied this standby ctl file to machine2 under the same directory structure (M:\oracle\product\10.2.0\oradata\oradgp) and replicated the same ctl file into 3 different files such as: CONTROL01.CTL, CONTROL02.CTL & CONTROL03.CTL
    Machine2
    8. I created an Oracle service called the same as primary (oradgp).
    9. Created a listener also.
    9. Set the Oracle Home & SID to the same name as primary (oradgp) <<<-- I am not sure about the sid one.
    10. I then copied over the pfile from the primary to standby and created an spfile with this one.
    It looks like this:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgs'
    *.fal_server='oradgp'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgs'
    *.log_archive_dest_2='SERVICE=oradgp LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgp'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgs
    log_file_name_convert='junk','junk'
    11. Use RMAN to restore the db as:-
    RMAN> startup mount;
    RMAN> restore database;
    Then RMAN created the datafiles.
    12. I then added the same number (4) of standby redo logs to machine2.
    13. Also added a tempfile though the temp tablespace was created per the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
    14. Ensuring the listener and Oracle service were running and that the database on machine2 was in MOUNT mode, I then started the redo apply using:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    It seems to have started the redo apply as I've checked the alert log and noticed that the sequence# was all "YES" for applied.
    ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    So copied over the REDO logs from the primary machine and placed them in the same directory structure of the standby.
    ########Q1. I understand that the standby database does not need online REDO Logs but why is it reporting in the alert log then??########
    I wanted to enable realtime apply so, I cancelled the recover by :-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    and issued:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    This too was successful and I noticed that the recovery mode is set to MANAGED REAL TIME APPLY.
    Checked this via the primary database also and it too reported that the DEST_2 is in MANAGED REAL TIME APPLY.
    Also performed a log switch on primary and it got transported to the standby and was applied (YES).
    Also ensured that there are no gaps via some queries where no rows were returned.
    15. I now wanted to perform a switchover, hence issued:-
    Primary_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
    All the archivers stopped as expected.
    16. Now on machine2:
    Stdby_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    17. On machine1:
    Primary_Now_Standby_SQL>SHUTDOWN IMMEDIATE;
    Primary_Now_Standby_SQL>STARTUP MOUNT;
    Primary_Now_Standby_SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    17. On machine2:
    Stdby_Now_Primary_SQL>ALTER DATABASE OPEN;
    Checked by switching the logfile on the new primary and ensured that the standby received this logfile and was applied (YES).
    However, here are my questions for clarifications:-
    Q1. There is a question about ONLINE REDO LOGS within "#" characters.
    Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
    SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    MRP0 APPLYING_LOG 1 47 452 1024000
    but :
    SQL> select max(sequence#) from v$archived_log;
    46
    Why is that? Also I have noticed that one of the sequence#s is NOT applied but the later ones are:-
    SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    42 NO
    43 YES
    44 YES
    45 YES
    46 YES
    What could be the possible reasons why sequence# 42 didn't get applied but the others did?
    After reading several documents I am confused at this stage because I have read that you can setup standby databases using 'standby' logs but is there another method without using standby logs?
    Q5. The log switch isn't happening automatically on the primary database where I could see the whole process happening on it own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
    Could this be due to inactivity on the primary database as I am not doing anything on it?
    Sorry if I have missed out something guys but I tried to put in as much detail as I remember...
    Thank you very much in advance.
    Regards,
    Bharath
    Edited by: Bharath3 on Jan 22, 2010 2:13 AM

    Parameters:
    Missing on the Primary:
    DB_UNIQUE_NAME=oradgp
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
    Missing on the Standby:
    DB_UNIQUE_NAME=oradgs
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
    You said: Also added a tempfile though the temp tablespace was created per the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
    RMAN should have also added the temp file. Note that as of 11g RMAN duplicate for standby will also add the standby redo log files at the standby if they already existed on the Primary when you took the backup.
    You said: ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    That is just the weird error that the RDBMS returns when the database tries to find the online redo log files. You see that at the start of the MRP because it tries to open them and if it gets the error it will manually create them based on their file definition in the controlfile combined with LOG_FILE_NAME_CONVERT if they are in a different place from the Primary.
    Your questions (Q1 answered above):
    You said: Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Up to you. Not a requirement.
    You said: Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    You are always in MANAGED mode when you use the RECOVER MANAGED STANDBY DATABASE command. If you use manual recovery "RECOVER STANDBY DATABASE" (NOT RECOMMENDED EVER ON A STANDBY DATABASE) then you are effectively in 'non-managed' mode although we do not call it that.
    You said: Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
    Log 46 (in your example) is the last FULL and ARCHIVED log hence that is the latest one to show up in V$ARCHIVED_LOG as that is a list of fully archived log files. Sequence 47 is the one that is current in the Primary online redo log and also current in the standby's standby redo log and as you are using real time apply that is the one it is applying.
    You said: What could be the possible reasons why sequence# 42 didn't get applied but the others did?
    42 was probably a gap. Select the FAL columns as well and it will probably say 'YES' for FAL. We do not update the Primary's controlfile every time we resolve a gap. Try the same command on the standby and you will see that 42 was indeed applied. Redo can never be applied out of order, so the max(sequence#) from v$archived_log where applied = 'YES' will tell you that every sequence before that number has to have been applied.
    You said: After reading several documents I am confused at this stage because I have read that you can setup standby databases using 'standby' logs but is there another method without using standby logs?
    Yes, If you do not have standby redo log files on the standby then we write directly to an archive log. Which means potential large data loss at failover and no real time apply. That was the old 9i method for ARCH. Don't do that. Always have standby redo logs (SRL)
    You said: Q5. The log switch isn't happening automatically on the primary database where I could see the whole process happening on it own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
    Could this be due to inactivity on the primary database as I am not doing anything on it?
    Log switches on the Primary happen when the current log gets full, when a log switch has not happened for the number of seconds you specified in the ARCHIVE_LAG_TARGET parameter, or when you say ALTER SYSTEM SWITCH LOGFILE (or use one of the other methods for switching log files). The heartbeat redo will eventually fill up an online log file, but it is about 13 bytes, so you do the math on how long that would take :^)
    You are shipping redo with ASYNC, so we send the redo as it is committed; there is no wait for the log switch. And we are in real time apply, so there is no wait for the log switch to apply that redo. In theory you could create an online log file large enough to hold an entire day's worth of redo and never switch for the whole day, and the standby would still be caught up with the primary.

  • Access to trunk port clarification

    Hello-
    I am looking to clarify a point of confusion for myself regarding connecting an access port to a trunk port. Consider the following switchport config on switch 1:
    Switch#1
    interface GigabitEthernet0/5
     switchport
     switchport access vlan 6
    ....and the corresponding config on its neighbor:
    Switch#2
    Interface GigabitEthernet10/8
    switchport
    switchport mode trunk
    switchport trunk allowed vlan 1,6,100
    My first question is- Is this a valid configuration? Secondly, what would the expected results be? I am curious about what vlans would be allowed to pass through..
    Thanks in advance-
    Brian

    This would work fine but it is not recommended.
    Also, only native VLAN (VLAN 1) and VLAN 6 traffic would pass between the switches.
    SW1-----F0/1----------f0/1----SW2
    SW1#sh int trunk 
    Port        Mode         Encapsulation  Status        Native vlan
    Fa0/1       auto         n-802.1q       trunking      1
    Port        Vlans allowed on trunk
    Fa0/1       1-1005
    Port        Vlans allowed and active in management domain
    Fa0/1       1,6
    Port        Vlans in spanning tree forwarding state and not pruned
    Fa0/1       1,6
    SW1#
    SW2
    SW2#sh int trunk 
    Port        Mode         Encapsulation  Status        Native vlan
    Fa0/1       on           802.1q         trunking      1
    Port        Vlans allowed on trunk
    Fa0/1       1,6,100
    Port        Vlans allowed and active in management domain
    Fa0/1       1,6,100
    Port        Vlans in spanning tree forwarding state and not pruned
    Fa0/1       1,6,100
    SW2#
    2) Part of this config is that any VLANs which have been configured on SW1 would be allowed through that port.
    ex:
    SW1#sh int trunk 
    Port        Mode         Encapsulation  Status        Native vlan
    Fa0/1       auto         n-802.1q       trunking      1
    Port        Vlans allowed on trunk
    Fa0/1       1-1005
    Port        Vlans allowed and active in management domain
    Fa0/1       1,6,10,20,30,40,50,60,70,80,90,100
    Port        Vlans in spanning tree forwarding state and not pruned
    Fa0/1       1,6,10,20,30,40,50,60,70,80,90,100 ...>>>>>>>>>>all vlans are allowed here.
    b)
    Whereas on Switch 2, if you create all these VLANs but do not allow them on the trunk interface you have configured, those VLANs will not flow through.
    e.g.:
    SW2#sh int tr
    Port        Mode         Encapsulation  Status        Native vlan
    Fa0/1       on           802.1q         trunking      1
    Port        Vlans allowed on trunk
    Fa0/1       1,6,100
    Port        Vlans allowed and active in management domain
    Fa0/1       1,6,100
    Port        Vlans in spanning tree forwarding state and not pruned
    Fa0/1       1,6,100>>>>>>>>>>>>>>>.Only 3 VLANs would be flowing through because they are explicitly defined, but if you allowed all then all VLANs would be shown here.
    I created all the VLANs above on SW2, but you can see only 3 VLANs are allowed as you have explicitly defined them.
    Hope this clarifies your query.
    Regards
    Inayath
    *************Plz dont forget to rate posts***********

  • ZCM 11 SP3 and Windows 8.1, need clarification...

    I don't know if I just can't read, but I need some clarification...
    When SP3 arrived, there were problems with Win 8.1.
    I have customers that have machines that were upgraded to 8.1, and it has worked with the already installed 11.2.3a agent, even if it's not supported. They deployed the ZCM 11.3 agent to these machines that were upgraded to 8.1, and they all crashed and never started again. I made tests in my test environment, same result.
    Last week I started to check and found the update "ZENworks 11.3.0a Windows 8.1 Update" (https://www.novell.com/support/kb/doc.php?id=7014805 / http://download.novell.com/Download?...d=0yMdXrTonF8~). I downloaded that yesterday and was going to test it today. I looked for the instructions again today, and now it's not available anymore - obsolete. I have not deployed it yet, so that is no problem.
    There are since yesterday a new update, "ZCM 11.3.0 WIN8.1 Patch 866736" (http://download.novell.com/Download?...d=OvBLs9qZhrU~).
    But the instructions for this one talk about how to update machines that have already been updated. Here the text mentions "11.3.0_WIN8.1". What is that? Is it the now obsolete "ZENworks 11.3.0a Windows 8.1 Update"? Or machines patched with the standard "Update for ZENworks (11 SP3)" that was created during the SP3 install? If it refers to the latter, it can't be done, because in both my customers' production environments and my test environment the Win 8.1 machines crashed and never came up.
    Next two methods in the description describes how to update "Windows 8.1 Update for ZENworks (11 SP3)", if already imported (zman supf "Windows 8.1 Update for ZENworks (11 SP3)" ZCM_11.3.0_WIN8.1_20140404_866736.zip).
    But for those who have not imported that, what is the way to go? You can't download "Windows 8.1 Update for ZENworks (11 SP3)" anymore, and the patch seems to be for that one?
    What is the way to get Win 8.1 to work with ZCM 11.3?
    Can anyone clarify this?
    I don't know if this belongs in the agent forum or the server forum, but I'll start here.
    /Stefan

    CRAIGDWILSON wrote:
    > New Versions of those patches are being rolled out.
    >
    > Normally it happens at the same time, but looks like timing was
    > slightly off.
    >
    >
    >
    > On 4/30/2014 11:31 AM, Niels Poulsen wrote:
    > > stesjo wrote:
    > >
    > > >
    > > > So, at this point, it means that there is no way to get ZCM 11.3
    > > > to work with Windows 8.1?
    > > >
    > > > Old patches removed, and just patches for the removed patches
    > > > available?
    > > >
    > > > One can assume, that there is a reasom why the 8.1 Update is
    > > > removed, and made obsolete?
    > > > /Stefan
    > >
    > > ... One would think so, yes. Not sure what's the reason...
    > >
    Cool :-)
    Niels
    A true red devil...

  • Clarification on Time Machine migration

    I am about to upgrade my trusty PowerBook G4 to a shiny new MacBook Pro and would just like some clarification on accessing my Time Machine files from my new computer. After reading some similar posts it sounds like some users have been able to access their files from their old system on new computers while others have had issues. If I transfer all of the contents of my old system to the new one via Migration Assistant, will Time Machine recognize the new computer as the old one? Should I do the transfer via Time Machine instead? Any assistance would be greatly appreciated. Thanks!

    Before you start migrating, be sure to deactivate/deauthorize the software on your PowerBook. If you'll have both computers in place, it is faster, I think, to put the PowerBook into FireWire target disk mode than to use Time Machine. Further, regardless of which method you use, because this is a PPC to Intel upgrade, I'd recommend that you only transfer the contents of your personal drive space and reinstall your software.
    There are two reasons I recommend reinstalling software: First, buying a new computer is about the only time I get rid of the cruft and junk that I accumulate and almost never use afterwards. I figure if it is a good idea for me, it is a good idea for others. Second, having done dozens of upgrades, I have a decent idea of what can be safely transferred and what can't, but long ago I figured it took almost as long to pick and choose what to transfer as it did to just reinstall everything.

  • Clarification on Bootcamp 3.1 install for Windows 7 Support

    Hello all,
    I'm looking for some expert opinon on how the BootCamp 3.1 update is supposed to be performed.
    Some 3rd-party websites state that these updates need to be applied prior to the Windows 7 install, while Apple's online documentation talks about installing Windows 7 and then doing the update.
    The reason why I think that Apple's process is more accurate is because the 64-bit driver download has an ".exe" extension. However, before I do anything with my MacBook Pro I wanted to find out what others are doing for this OS install process.
    If you could clarify the process it would be appreciated, as the Software Update feature in Mac OS X doesn't seem to find any updates related to Boot Camp.
    Thanks in advance

    So what you are saying is that even though the Boot Camp utility exists and has already allocated my spot for Windows 7 on Snow Leopard, Boot Camp 3.1 doesn't exist yet and thus I can't get the 3.1 update?
    I'm very familiar with computing terms and setting up devices... However, no documentation really clarifies the whole 3.1 process. All that I can see from Apple's perspective is to allocate the Boot Camp space, run the Windows 7 install, and then use the Snow Leopard update and the associated drivers (32/64-bit).
    However, even with the Mac OS X information online, I can't figure out how you get that update, as each site describes it differently.
    If someone could clarify the exact process, from an out-of-box Mac, to the Snow Leopard Boot Camp allocation and Windows 7 install, to these 3.1 updates, it would greatly help clear up my confusion.
    Thanks!
