Service master and service description

Hi,
*Why do we maintain a service master? Why don't we simply maintain the description in the PO,* since account determination is already possible through the material group and its valuation class?
*Regards*
*Anil*
Edited by: Anil Patil on Feb 9, 2010 9:17 PM

Hi
Service masters are used mainly for reporting and for consistent reuse of service definitions.
You can later run reports to show how much you spend on a certain service.
Many Thanks
Silas

Similar Messages

  • SERVICE  PO using service master and service PO using Material Type DIEN

    Dear Gurus, can you differentiate between a service PO using the service master and a service PO using material type DIEN?
    Regards
    Vinod Suresh Kakade

    Hi,
    Material type DIEN is used when you offer a "SERVICE" to your customer.
    When you are procuring a "SERVICE" from your vendor, you can use External Services Management, where the service master is used (steps: create the PO in ME21N with item category "D" and account assignment category "K" or "C", then ML81N, MIRO and F-53).
    In the standard system the DIEN material type is designed for sales usage only, but you can also use DIEN for purchasing to avoid External Services Management.
    Regards,
    Biju K

  • Advantages of Service Master and service acceptance sheet

    Hi,
    I want to know the pros and cons of the service master and the service acceptance (entry) sheet for preparing a document.
    Please give me your views on this.
    Regards,
    R. Dillibabu.

    Dear Dillibabu,
    When you create a service master, you avoid repetitive data-entry effort, because you do not have to key in the same (potentially voluminous) text for services again and again. You also have the option to group services using the service/material group, and to assign a valuation class in the service master for account determination purposes.
    Service entry works just like a goods receipt: in the background a material document is created with movement type 101, which you can see in the PO history. The biggest advantage of using the service entry sheet is the financial implication. When the service entry is accepted, a G/L account (e.g. a service expense account) is debited and the GR/IR account is credited; this is then cleared at invoice posting, and the liability towards the vendor is created.
    If you do not do a service entry, you have to track manually which services of what value were performed, and inform finance.
    I hope this helps.
    Thanks and regards,
    Siddharth
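The two-sided posting described above (debit expense / credit GR/IR at service entry, then clear GR/IR against the vendor at invoice) can be sketched as a minimal double-entry example. This is illustrative only; the account names are hypothetical, not actual SAP G/L accounts:

```python
# Minimal double-entry sketch of the service-entry / invoice flow described
# above. Account names are illustrative placeholders, not SAP configuration.
from collections import defaultdict

ledger = defaultdict(float)

def post(debit_acct, credit_acct, amount):
    """Record one balanced journal entry (debits positive, credits negative)."""
    ledger[debit_acct] += amount
    ledger[credit_acct] -= amount

# Service entry sheet accepted (ML81N): expense is debited, GR/IR credited.
post("service_expense", "gr_ir_clearing", 1000.0)

# Invoice posted (MIRO): GR/IR is cleared, liability to the vendor created.
post("gr_ir_clearing", "vendor_payable", 1000.0)

print(ledger["gr_ir_clearing"])   # nets to zero once both steps have posted
print(ledger["vendor_payable"])   # open liability towards the vendor
```

The point of the sketch is that GR/IR acts purely as a clearing account: it carries a balance only between service acceptance and invoice posting.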

  • Service master and  material type 'service'

    Hi everybody.
    I am a bit confused about "Service"
    There is a service master that we can use to store and procure services.
    We can also use the material master to store a 'service' using the material type 'service' and procure the service that way.
    I want to know: what is the difference between the two, and when should I use which?

    Hi
    If you do not want to use External Services Management, you can use materials of type Service (DIEN).
    Such a PO is similar to any other account-assigned PO with free text; instead of free text you use a material master, which also helps you distinguish service purchases from other material purchases. You do not use item category 'D' in this case, and you post a goods receipt.
    Service masters are used when you use item category 'D' (External Services Management). This gives additional flexibility: service specifications, service outlines, service entry sheets, etc.
    Best regards
    Ramki

  • Mail Service and one Opendirectory Master and two failed Replicas

    Hi.
    I'm experiencing a problem with the email service.
    I have one OD master and 4 replicas connected via a Cisco PIX VPN. Everything was working fine until we received the "new" G5 server to replace the old G4 server.
    The old G4 was the OD master, and I planned to make the G5 the new OD master. Following some instructions from this forum, I connected the G5 as a replica of the G4, then shut down the G4 and promoted the G5 to master (I also changed its name and IP address), and rebuilt all the replicas against the G5. After that, everything magically worked perfectly.
    After two weeks we had internet link problems with two of the replicas installed in other countries, so those replicas were down for about two weeks.
    Once the internet links were re-established, all users with HOME DIRECTORIES on those replicas BUT with email on the OD master stopped receiving mail. Their email clients showed no error, but when I checked the Postfix queue I saw all the messages waiting for delivery, with error messages like "lmtp connection error" (something like that) and so on.
    Checking Apple's documentation and the forums, I realized the problem was the replicas: the Password Service replica log said "Password Service Not Found". I rebuilt those two replicas again, but the problem came back.
    Today I tested one account by pointing its home directory to the OD master, and ALL its email was delivered to the mailbox right away.
    When I saw this, I started checking with the Inspector in WGM to find a setting to work around the problem, and I realized that in the Config-passwordservice record the replica list still contains the old IP of the G5 (from before it became the master) and another replica that no longer exists.
    My questions are:
    1. How can I clean up this XML file to fix this?
    2. Why did the email service stop working, even though the email service runs on another server (not the replica)?
    3. Is there a way to disable or change an option in an XML or LDAP file to avoid this sync issue?
    4. How can I rebuild the OD master without losing the user/password list, so that I get a clean XML configuration file? (Or what is the correct way to make this kind of change?)
    Thanks a lot.

    Very good, I was trying to find this kind of info. Thanks.
    Checking the file on my OD master, I realized it doesn't look the same; please take a look.
    See the comments marked in the XML below.
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>DecommissionedReplicas</key>
        <array>
            <string>Replica2</string> <!-- this list doesn't match the replicas -->
            <string>Replica3</string>
            <string>Replica4</string>
            <string>Replica6</string>
        </array>
        <key>ID</key>
        <string>A20EA3E0AB31C9E61FD65069E21E55A0</string>
        <key>Parent</key>
        <dict>
            <key>EntryModDate</key>
            <date>2007-09-18T04:41:37Z</date>
            <key>IDRangeBegin</key>
            <string>0x00000000000000000000000000002491</string>
            <key>IDRangeEnd</key>
            <string>0x00000000000000000000000000002685</string>
            <key>IP</key>
            <array>
                <string>172.16.10.7</string> <!-- this is the old IP address from before it became the OD master -->
                <string>172.16.10.5</string>
            </array>
            <key>LastSyncDate</key>
            <date>2007-09-20T17:25:22Z</date>
            <key>ReplicaPolicy</key>
            <string>SyncDefault</string>
        </dict>
        <key>Replicas</key>
        <array>
            <dict>
                <key>EntryModDate</key>
                <date>2007-09-18T04:43:53Z</date>
                <key>IDRangeBegin</key>
                <string>0x00000000000000000000000000000209</string>
                <key>IDRangeEnd</key>
                <string>0x000000000000000000000000000003fd</string>
                <key>IP</key>
                <string>172.xx.12.5</string>
                <key>LastSyncDate</key>
                <date>2007-09-20T15:31:17Z</date>
                <key>LastSyncFailedAttempt</key>
                <date>2007-09-18T04:43:40Z</date>
                <key>ReplicaName</key>
                <string>Replica1</string>
                <key>ReplicaPolicy</key>
                <string>SyncDefault</string>
                <key>ReplicaStatus</key>
                <string>NotFound</string>
                <key>SASLRealm</key>
                <string>localhost</string>
                <key>SyncInterval</key>
                <integer>300</integer>
            </dict>
            <dict>
                <key>EntryModDate</key>
                <date>2007-09-20T15:31:32Z</date>
                <key>IDRangeBegin</key>
                <string>0x00000000000000000000000000001861</string>
                <key>IDRangeEnd</key>
                <string>0x00000000000000000000000000001a55</string>
                <key>IP</key>
                <string>172.xx.10.169</string> <!-- this one hasn't existed for a long time -->
                <key>LastSyncFailedAttempt</key>
                <date>2007-09-20T15:31:17Z</date>
                <key>ReplicaName</key>
                <string>Replica5</string>
                <key>ReplicaPolicy</key>
                <string>SyncDefault</string>
                <key>ReplicaStatus</key>
                <string>NotFound</string>
                <key>SASLRealm</key>
                <string>chile.dilbrands.com</string>
                <key>SyncInterval</key>
                <integer>86400</integer>
            </dict>
            <dict>
                <key>EntryModDate</key>
                <date>2007-09-20T05:00:55Z</date>
                <key>IDRangeBegin</key>
                <string>0x00000000000000000000000000002289</string>
                <key>IDRangeEnd</key>
                <string>0x0000000000000000000000000000247d</string>
                <key>IP</key>
                <string>172.xx.13.5</string>
                <key>LastSyncDate</key>
                <date>2007-09-20T15:31:17Z</date>
                <key>LastSyncFailedAttempt</key>
                <date>2007-09-20T05:00:45Z</date>
                <key>ReplicaName</key>
                <string>Replica7</string>
                <key>ReplicaPolicy</key>
                <string>SyncDefault</string>
                <key>ReplicaStatus</key>
                <string>NotFound</string>
                <key>SASLRealm</key>
                <string>localhost</string>
                <key>SyncInterval</key>
                <integer>300</integer>
            </dict>
            <dict>
                <key>EntryModDate</key>
                <date>2007-09-18T01:33:32Z</date>
                <key>IDRangeBegin</key>
                <string>0x00000000000000000000000000002699</string>
                <key>IDRangeEnd</key>
                <string>0x0000000000000000000000000000288d</string>
                <key>IP</key>
                <string>172.xx.10.6</string>
                <key>LastSyncDate</key>
                <date>2007-09-20T17:25:22Z</date>
                <key>ReplicaName</key>
                <string>Replica8</string>
                <key>ReplicaPolicy</key>
                <string>SyncOnSchedule</string>
                <key>SASLRealm</key>
                <string>localhost</string>
                <key>SyncInterval</key>
                <integer>7200</integer>
            </dict>
            <dict>
                <key>EntryModDate</key>
                <date>2007-09-20T15:31:49Z</date>
                <key>IDRangeBegin</key>
                <string>0x000000000000000000000000000028a1</string>
                <key>IDRangeEnd</key>
                <string>0x00000000000000000000000000002a95</string>
                <key>IP</key>
                <string>172.xx.11.101</string> <!-- this one always fails, but network and DNS work perfectly -->
                <key>LastSyncFailedAttempt</key>
                <date>2007-09-20T15:31:17Z</date>
                <key>ReplicaName</key>
                <string>Replica9</string>
                <key>ReplicaStatus</key>
                <string>NotFound</string>
                <key>SASLRealm</key>
                <string>localhost</string>
                <key>SyncInterval</key>
                <integer>300</integer>
            </dict>
        </array>
        <key>Status</key>
        <string>AllowReplication</string>
    </dict>
    </plist>
    Thanks again.
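Regarding question 1 (cleaning the stale entries out of this record): assuming the record has been exported to a plain XML plist like the one shown above, the filtering can be done programmatically. This is only a sketch; the record shape mirrors the plist above, and the stale IP and replica name are the ones the poster flagged, used here as placeholders:

```python
# Sketch: filter stale entries out of a password-server replica record.
# The record shape mirrors the plist above; stale values are placeholders.
import plistlib

def clean_replica_config(cfg, stale_ips, stale_names):
    """Return a copy of the record without stale parent IPs / replica entries."""
    cfg = dict(cfg)
    parent = dict(cfg.get("Parent", {}))
    parent["IP"] = [ip for ip in parent.get("IP", []) if ip not in stale_ips]
    cfg["Parent"] = parent
    cfg["Replicas"] = [r for r in cfg.get("Replicas", [])
                       if r.get("ReplicaName") not in stale_names]
    return cfg

# Tiny stand-in for the plist shown above.
record = {
    "Parent": {"IP": ["172.16.10.7", "172.16.10.5"]},
    "Replicas": [{"ReplicaName": "Replica1"}, {"ReplicaName": "Replica5"}],
}
cleaned = clean_replica_config(record, {"172.16.10.7"}, {"Replica5"})
print(cleaned["Parent"]["IP"])                          # ['172.16.10.5']
print([r["ReplicaName"] for r in cleaned["Replicas"]])  # ['Replica1']

# The cleaned record can be serialized back to XML plist form:
xml_bytes = plistlib.dumps(cleaned)
```

How the edited record gets back into the directory is a separate, server-specific step; the sketch only shows the data transformation itself.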

  • Current Date in Service Description of ESS Services

    Hi All,
    We want to display the current date in the text of the service description.
    In SPRO -> Personnel Management -> ESS -> General Settings -> Home page for ESS -> Services -> Define Services, I tried using &DATE&, but it shows 28.02.2011 rather than the current date, i.e. 14.03.2011.
    The server date is correct, i.e. 14.03.2011.
    Can you please help: how can I display the current date in the service description?
    Cheers!!!
    Umang

    Hi Umang,
    You can use a proxy class to append the date dynamically to the service description.
    Refer to the blog
    /people/amir.madani/blog/2007/01/05/create-dynamic-xss-homepages-with-static-services-using-a-simple-proxy-class
    Revert if you face any difficulty with this.
    Thanks
    Prashant

  • Service Description Text

    Within Define Services of the Homepage Framework for ESS, I have added text to my 'Service Description' for Who's Who
    The text is one sentence (all on one line) in ERP.  When I review the text in ESS for the service, the text appears on several lines.
    For example in ERP Homepage Framework:
    Search for all employees by name and find basic information about colleagues.
    In ESS the same text looks like this:
    Search for all employees by name
    and find basic information
    about colleagues.
    How do I resolve this problem?
    Thanks
    WB

    Go into the PCD, select the relevant iView and press Preview.
    Then press Ctrl and right-click the mouse to bring up the details below.
    Select Text Wrapping and change it to 'Yes'.
    Then select 'Text' and ensure that all the descriptive text has spaces between each word.
    Then press 'Apply'.

  • ESS service description on EP

    Hi,
    We are implementing Web Dynpro ESS in ERP2004. We have a requirement to provide a hyperlink in a service description. I am using the relevant IMG activity of the homepage framework to insert the description, but it only uploads text files. Is there a way to insert a service description with a hyperlink?
    Thank you.
    Regards,
    Abhijeet.

    OK... so just to follow you:
    (1) Make a resource for the URL link (i.e. for whatever URL the "Additional Info" is located at); you should already have one for your actual service (like Benefits Overview).
    (2) Assign the resource to a service, which should be the one that displays the link ("Additional Info").
    (3) Assign that service (and the service for Benefits Overview, for example) to the <b>SAME</b> "subarea", with "Benefits Overview" given an order of, say, 10 and the "Additional Info" service an order of 20; that makes them appear correctly (i.e. stacked).
    (4) Assign your subarea to an area (probably already done, since it was there for "Benefits Overview").
    (5) Make sure your changes are moved to the right clients (the config is client-dependent) if in the same instance. For example, if you configure in client 030 but your data/unit test is in 035, log into client 035, run t-code SCC1 and pull over your transport from 030 (it's OK; you can do this as often as you want without releasing the transport).
    Now... you do NOT have to restart the portal. Your change will not show immediately, but that is OK. Have your portal admin go into "System Admin -> Support -> Portal Content Directory", click the "administration" link towards the bottom, then click the "release cache" button. This clears the cache, and the next time you access your pages they will reload with your new changes. Just that easy!
    Hope this helps.

  • SSIS package is failing in SQL agent job with webserviceTaskException: Service Description cannot be null

    Hi All
    We are using the Web Service task in our SSIS package, and the package runs successfully in SSDT. When we created a SQL Agent job using that package, it fails with: WebserviceTaskException: Service Description cannot be null.
    We have given the SSIS proxy account access to the web service, to the WSDL file folder, and to the temp folder.
    What could be the reason for the failure?
    Surendra Thota

    Hi Surendra,
    As I understand it, the error message is too general. To troubleshoot this issue, we should look at the detailed error message for the job. For more details, please see:
    Troubleshooting Jobs
    SQL Server Agent Error Log
    Besides, when you call a Microsoft SQL Server Integration Services (SSIS) package outside a SQL Server Agent job step, the package runs successfully; if you then execute the unmodified package via a SQL Server Agent job step, it fails. This scenario is always related to the user account used to run the package under SQL Server Agent. Please also verify that the account has access to the web site or to the Web Service Description Language (WSDL) file for the HTTP connection manager.
    References:
    SSIS package does not run when called from a SQL Server Agent job step
    Example using Web Services with SQL Server Integration Services
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • [ESS] Line break in service description ??

    Hi,
    Strangely, even though I put a line break in the description of services in view V_T7XSSSERSRV(C), it does not show up in the Portal.
    A workaround is to replace '.' with cl_abap_char_utilities=>newline in the proxy class I am using, but this solution requires a proxy class (which is not always functionally required).
    Does anyone have the same problem?
    Thanks in advance.
    Best regards,
    Guillaume

    Hi Guillaume,
    Yes, I have the same problem. When I want a line break, I press Enter to start a second line, since the service description is only a text field; as far as I know, it is not an ABAP editor where we could use a newline statement to break the line.
    Hope it helps,
    regards
    Vijai

  • CUP 5.3. Password Self Service description+sequence

    Hi together,
    Where is it possible to change the Password Self Service description? The description appears on the "request access" screen. Every request type can be changed in the configuration, but not Password Self Service. It also isn't possible to change the sequence...
    Is this correct? Or is there another possibility I didn't notice?
    Thanks & Best Regards
    Alexa

    Hi,
    We are on SP8, but we will upgrade to SP09 in the next few weeks...
    I need to change the description because of different language requirements for different users. Because the requestors aren't in the UME, we can't vary the language with the login. We enter the request types in two languages in the description field, so that all requestors understand the different types. But this isn't possible for Password Self Service, because there is no configuration for its description.
    Perhaps my question will be solved with SP09.
    Thanks.
    Alexa

  • Error committing 1 master and 2 children in one transaction

    I have a master-detail page (where master and detail views are both tables).
    If I insert a new master record (without committing it) and then try to insert two child records, I receive the following error when I commit the transaction:
    java.sql.SQLIntegrityConstraintViolationException: ORA-02291: integrity constraint (PUBLICSUITE.DREXTRL_LANG_DREXTRL_FK1) violated - parent key not found
    ORA-06512: at line 1
         at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:85)
         at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:116)
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:177)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
         at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
         at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:191)
         at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:944)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1222)
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3381)
         at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3462)
         at oracle.jdbc.driver.OracleCallableStatement.executeUpdate(OracleCallableStatement.java:3877)
         at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1349)
         at oracle.jbo.server.OracleSQLBuilderImpl.doEntityDML(OracleSQLBuilderImpl.java:434)
         at oracle.jbo.server.EntityImpl.doDML(EntityImpl.java:7779)
         at axi.casemanagement.model.base.CMEntityImpl.doDML(CMEntityImpl.java:62)
         at oracle.jbo.server.EntityImpl.postChanges(EntityImpl.java:6162)
         at oracle.jbo.server.DBTransactionImpl.doPostTransactionListeners(DBTransactionImpl.java:3253)
         at oracle.jbo.server.DBTransactionImpl.postChanges(DBTransactionImpl.java:3061)
         at oracle.jbo.server.DBTransactionImpl.commitInternal(DBTransactionImpl.java:2180)
         at oracle.jbo.server.DBTransactionImpl.commit(DBTransactionImpl.java:2382)
         at oracle.adf.model.bc4j.DCJboDataControl.commitTransaction(DCJboDataControl.java:1565)
         at oracle.adf.model.dcframe.LocalTransactionHandler.commit(LocalTransactionHandler.java:140)
         at oracle.adf.model.dcframe.DataControlFrameImpl.commit(DataControlFrameImpl.java:597)
         at oracle.adfinternal.controller.util.model.DCFrameImpl.commit(DCFrameImpl.java:83)
         at oracle.adfinternal.controller.activity.TaskFlowReturnActivityLogic.resolveTransaction(TaskFlowReturnActivityLogic.java:509)
         at oracle.adfinternal.controller.activity.TaskFlowReturnActivityLogic.execute(TaskFlowReturnActivityLogic.java:114)
         at oracle.adfinternal.controller.engine.ControlFlowEngine.doRouting(ControlFlowEngine.java:834)
         at oracle.adfinternal.controller.engine.ControlFlowEngine.doRouting(ControlFlowEngine.java:718)
         at oracle.adfinternal.controller.engine.ControlFlowEngine.routeFromActivity(ControlFlowEngine.java:491)
         at oracle.adfinternal.controller.engine.ControlFlowEngine.performControlFlow(ControlFlowEngine.java:108)
         at oracle.adfinternal.controller.application.NavigationHandlerImpl.handleNavigation(NavigationHandlerImpl.java:86)
         at org.apache.myfaces.trinidadinternal.application.NavigationHandlerImpl.handleNavigation(NavigationHandlerImpl.java:43)
         at com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:130)
         at org.apache.myfaces.trinidad.component.UIXCommand.broadcast(UIXCommand.java:190)
         at oracle.adf.view.rich.component.fragment.UIXRegion.broadcast(UIXRegion.java:142)
         at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent$1.run(ContextSwitchingComponent.java:70)
         at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent._processPhase(ContextSwitchingComponent.java:274)
         at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent.broadcast(ContextSwitchingComponent.java:74)
         at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent$1.run(ContextSwitchingComponent.java:70)
         at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent._processPhase(ContextSwitchingComponent.java:274)
         at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent.broadcast(ContextSwitchingComponent.java:74)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.broadcastEvents(LifecycleImpl.java:754)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl._executePhase(LifecycleImpl.java:282)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:175)
         at javax.faces.webapp.FacesServlet.service(FacesServlet.java:265)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
         at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at oracle.adf.model.servlet.ADFBindingFilter.doFilter(ADFBindingFilter.java:181)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at oracle.adfinternal.view.faces.webapp.rich.RegistrationFilter.doFilter(RegistrationFilter.java:85)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:279)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl._invokeDoFilter(TrinidadFilterImpl.java:239)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl._doFilterImpl(TrinidadFilterImpl.java:196)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl.doFilter(TrinidadFilterImpl.java:139)
         at org.apache.myfaces.trinidad.webapp.TrinidadFilter.doFilter(TrinidadFilter.java:92)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at oracle.security.jps.wls.JpsWlsFilter.doFilter(JpsWlsFilter.java:102)
         at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:65)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at oracle.adf.library.webapp.LibraryFilter.doFilter(LibraryFilter.java:149)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(Unknown Source)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Apparently it tries to commit the children before the parent.
    This error occurs only if you insert more than one child; if you insert just one, everything works fine.
    Does anyone have a solution for this?
    I use JDeveloper Studio Edition Version 11.1.1.0.2, ADF Faces & Business Components.
    Edited by: Gert Leenders on 11-mei-2009 6:50
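The constraint behaviour behind ORA-02291 can be reproduced outside ADF. The following minimal sketch uses SQLite purely as a stand-in for the Oracle schema (table and column names are made up): child rows whose foreign key references a parent that is not yet in the database are rejected, which is exactly what happens when the framework posts the entities in the wrong order:

```python
# Minimal stand-in for the ORA-02291 situation above: inserting child rows
# whose foreign key points at a parent that is not yet in the database.
# SQLite is used here only to illustrate; table names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only if asked
conn.execute("CREATE TABLE master (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE child (
                    id INTEGER PRIMARY KEY,
                    master_id INTEGER REFERENCES master(id))""")

# Wrong order: a child before its parent -> constraint violation, the same
# class of error the ADF commit produced.
try:
    conn.execute("INSERT INTO child (id, master_id) VALUES (1, 42)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)                  # foreign key constraint failed

# Right order: parent first, then any number of children, then commit.
conn.execute("INSERT INTO master (id) VALUES (42)")
conn.execute("INSERT INTO child (id, master_id) VALUES (1, 42)")
conn.execute("INSERT INTO child (id, master_id) VALUES (2, 42)")
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM child").fetchone()[0])
```

(Oracle additionally allows deferring constraint checks to commit time, which is why the error in the post only surfaces when the transaction is committed rather than at insert time.)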

    John,
    We have actually read subsection 34.8, "Controlling Entity Posting Order to Avoid Constraint Violations", but since we use standard BC functionality and we insert the master before the children, we assumed the ADF BC framework would do the insertion in the right order.
    Also, because we have a "Composed" association, we should not need to override the postChanges() method to control the order in which the entities are saved (according to the documentation).
    *Given that it works fine if we insert just one child, it looks more like a bug to me.*
    If we mark the association as "Composite" in JDeveloper, we receive the following error:
    SEVERE: Server Exception during PPR, #1
    oracle.jbo.InvalidOwnerException: JBO-25030: Failed to find or invalidate owning entity: detail entity ExterneRolTaal, row key oracle.jbo.Key[4195 null ].
         at oracle.jbo.server.EntityImpl.internalCreate(EntityImpl.java:1048)
         at oracle.jbo.server.EntityImpl.create(EntityImpl.java:811)
         at axi.casemanagement.model.base.CMEntityImpl.create(CMEntityImpl.java:17)
         at oracle.jbo.server.EntityImpl.callCreate(EntityImpl.java:921)
         at oracle.jbo.server.ViewRowStorage.create(ViewRowStorage.java:1109)
         at oracle.jbo.server.ViewRowImpl.create(ViewRowImpl.java:412)
         at oracle.jbo.server.ViewRowImpl.callCreate(ViewRowImpl.java:429)
         at oracle.jbo.server.ViewObjectImpl.createInstance(ViewObjectImpl.java:4453)
         at oracle.jbo.server.QueryCollection.createRowWithEntities(QueryCollection.java:1686)
         at oracle.jbo.server.ViewRowSetImpl.createRowWithEntities(ViewRowSetImpl.java:2194)
         at oracle.jbo.server.ViewRowSetImpl.doCreateAndInitRow(ViewRowSetImpl.java:2235)
         at oracle.jbo.server.ViewRowSetImpl.createRow(ViewRowSetImpl.java:2216)
         at oracle.jbo.server.ViewObjectImpl.createRow(ViewObjectImpl.java:9008)
         at oracle.jbo.uicli.binding.JUCtrlActionBinding.doIt(JUCtrlActionBinding.java:1227)
         at oracle.adf.model.binding.DCDataControl.invokeOperation(DCDataControl.java:2126)
         at oracle.jbo.uicli.binding.JUCtrlActionBinding.invoke(JUCtrlActionBinding.java:697)
         at oracle.adf.controller.v2.lifecycle.PageLifecycleImpl.executeEvent(PageLifecycleImpl.java:392)
         at oracle.adfinternal.view.faces.model.binding.FacesCtrlActionBinding._execute(FacesCtrlActionBinding.java:159)
         at oracle.adfinternal.view.faces.model.binding.FacesCtrlActionBinding.execute(FacesCtrlActionBinding.java:143)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at com.sun.el.parser.AstValue.invoke(AstValue.java:157)
         at com.sun.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:283)
         at oracle.adf.controller.internal.util.ELInterfaceImpl.invokeMethod(ELInterfaceImpl.java:136)
         at oracle.adfinternal.controller.activity.MethodCallActivityLogic.execute(MethodCallActivityLogic.java:140)
         at oracle.adfinternal.controller.engine.ControlFlowEngine.doRouting(ControlFlowEngine.java:834)
         at oracle.adfinternal.controller.engine.ControlFlowEngine.doRouting(ControlFlowEngine.java:718)
         at oracle.adfinternal.controller.engine.ControlFlowEngine.routeFromActivity(ControlFlowEngine.java:491)
         at oracle.adfinternal.controller.engine.ControlFlowEngine.performControlFlow(ControlFlowEngine.java:108)
         at oracle.adfinternal.controller.application.NavigationHandlerImpl.handleNavigation(NavigationHandlerImpl.java:86)
         at org.apache.myfaces.trinidadinternal.application.NavigationHandlerImpl.handleNavigation(NavigationHandlerImpl.java:43)
         at com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:130)
         at org.apache.myfaces.trinidad.component.UIXCommand.broadcast(UIXCommand.java:190)
         at oracle.adf.view.rich.component.fragment.UIXRegion.broadcast(UIXRegion.java:142)
         at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent$1.run(ContextSwitchingComponent.java:70)
         at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent._processPhase(ContextSwitchingComponent.java:274)
         at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent.broadcast(ContextSwitchingComponent.java:74)
         at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent$1.run(ContextSwitchingComponent.java:70)
         at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent._processPhase(ContextSwitchingComponent.java:274)
         at oracle.adf.view.rich.component.fragment.ContextSwitchingComponent.broadcast(ContextSwitchingComponent.java:74)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.broadcastEvents(LifecycleImpl.java:754)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl._executePhase(LifecycleImpl.java:282)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:175)
         at javax.faces.webapp.FacesServlet.service(FacesServlet.java:265)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
         at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at oracle.adf.model.servlet.ADFBindingFilter.doFilter(ADFBindingFilter.java:181)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at oracle.adfinternal.view.faces.webapp.rich.RegistrationFilter.doFilter(RegistrationFilter.java:85)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:279)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl._invokeDoFilter(TrinidadFilterImpl.java:239)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl._doFilterImpl(TrinidadFilterImpl.java:196)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl.doFilter(TrinidadFilterImpl.java:139)
         at org.apache.myfaces.trinidad.webapp.TrinidadFilter.doFilter(TrinidadFilter.java:92)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at oracle.security.jps.wls.JpsWlsFilter.doFilter(JpsWlsFilter.java:102)
         at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:65)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at oracle.adf.library.webapp.LibraryFilter.doFilter(LibraryFilter.java:149)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(Unknown Source)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

  • Replication of product master and Product category

    Hi
    Please let me know steps involved in replicating
    Product master and product category from R/3 to SRM
    Regards
    Ashish

    Hi Ashish,
    The blogs mentioned by Kathirvel explain the entire replication procedure of master data from R/3 to SRM clearly.
    Nevertheless, here are some points that need to be taken care of during replication:
    1. Ensure that the tables CRMCONSUM, CRMSUBTAB, CRMPAROLTP, CRMRFCPAR are duly filled in the R/3 system.
    2. Ensure that the SMOFPARSFA table is duly filled in the SRM system.
    3. Maintain the filter settings using R3AC3 and R3AC1 as per your requirement in SRM system.
    4. Set up the Administrative console using SMOEAC and create both R/3 and SRM nodes and define the site attributes.
    5. Ensure that all the queues (both inbound and outbound) are empty in both the systems before replication.
    6. Start initial load of customizing objects (DNL_CUST_BASIS3, DNL_CUST_PROD0 & DNL_CUST_PROD1) using R3AS and monitor the loaded jobs using R3AM1.
    7. After successful replication of customizing objects, load the business objects (material_master, service master) using R3AS.
    8. The replicated material groups and product types can be seen using COMM_HIERARCHY txn.
    9. The replicated materials can be seen using COMMPR01.
    Hope this makes the required settings clearer.
    Clarifications are welcome.
    Award points for helpful answers.
    Rgds,
    Teja

  • Database link with the alias and full description in the connect string

    Hi,
    I have created database links in two ways: one using an alias from a tnsnames.ora entry, and one using the full connect descriptor.
    Suppose I remove the tnsnames.ora file: what will be the impact on the database links? I am sure the DB link created with the alias won't work, but what about the DB link created with the full description? Which one do you prefer?
    Thanks

    # Parameter file initora for Database prd
    ### Global database name is db_name.db_domain
    global_names = TRUE
    db_name = prd
    db_domain = world
    # TNSNAMES.ORA for prd ###############################
    prd.world = (DESCRIPTION = (ADDRESS = (COMMUNITY = tcp.world)
    (PROTOCOL = TCP) (Host = 100.10.100.1) (Port = 1521))
    (CONNECT_DATA = (SID = prd) (GLOBAL_NAME = prd.world)
    (SERVER = DEDICATED)))
    Our database link points from the local database test to the remote database prd. Therefore we need the global database name for prd. Ask the remote database administrator for this information, or connect to prd and execute the following query on prd:
    SQL> select GLOBAL_NAME from GLOBAL_NAME;
    GLOBAL_NAME
    prd.WORLD
    CREATE DATABASE LINK prd
    CONNECT TO system IDENTIFIED BY system_passwd
    USING 'prd';  -- 'prd' is the tnsnames.ora alias
    The link is then used like this: select ename from [email protected]
    useful link
    http://www.akadia.com/services/ora_dblinks.html
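    To illustrate the difference the question asks about, here is a sketch of both variants (the link names prd_alias and prd_full are made up for illustration; host, port and SID are taken from the tnsnames.ora entry above). If tnsnames.ora is removed on the local database server, only the second link keeps working, because the full descriptor is stored in the data dictionary instead of being resolved through tnsnames.ora at connect time:

    ```sql
    -- Variant 1: resolved through tnsnames.ora (breaks if the alias entry is removed)
    CREATE DATABASE LINK prd_alias
      CONNECT TO system IDENTIFIED BY system_passwd
      USING 'prd';

    -- Variant 2: full connect descriptor stored inside the link itself
    -- (no tnsnames.ora lookup needed when the link is used)
    CREATE DATABASE LINK prd_full
      CONNECT TO system IDENTIFIED BY system_passwd
      USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=100.10.100.1)(PORT=1521))
              (CONNECT_DATA=(SID=prd)))';
    ```

    Note that with global_names = TRUE the link name must still match the remote global database name, regardless of which USING form is chosen.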

  • Difference between Item created under Material Master and Equipment

    Hello Expert,
    I am a bit confused about items created in the material master versus equipment:
    As per my understanding, an item created in the item master belongs to some BOM, and the BOM defines the structure for the item.
    Then what purpose exactly is served by equipment?
    Are equipment records serialized items with some different structure, or are they derived from the BOM?
    Please guide me to understand this concept.
    br,
    Pushkar

    Dear,
    Check what I found: a material master is used to describe an item, usually stockable, that is purchased, manufactured, sold, inspected, serviced, etc. A material master is usually created when there are multiple quantities of an item, say a product that is being manufactured.
    A material master may have a serial number profile entered in order to allow distinguishing between the different copies of the same item.   For stocking,  inspecting, sales, manufacturing, etc;  the exact serial number(s) being transacted will need to be specified.
    Depending  on the serial number profile specifics, each SN may get a unique equipment master, which records information specific to the material-SN combination (not just about the material in general).  For instance, if a product sold to a customer is returned for service, that material-SN (=equipment master) will have a record of the service order used.
    For items which are never inventoried but for which information particularly PM transactions are recorded, an equipment master without a material master may be created, for instance a piece of processing equipment.
    So an equipment master may link to a material-SN combination, or may be a stand-alone.
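    As a rough illustration (this is not SAP code; all names and fields are invented for the sketch), the relationships described above can be modeled like this: a material master describes an item in general, while an equipment master pins down one specific material/serial-number combination, or stands alone for a non-inventoried asset.

    ```python
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class MaterialMaster:
        """Describes an item in general (stockable, purchased, sold, ...)."""
        material_number: str
        description: str
        serial_number_profile: Optional[str] = None  # enables per-unit tracking

    @dataclass
    class EquipmentMaster:
        """One specific unit: either a material + serial-number combination,
        or a stand-alone asset with no material master at all."""
        equipment_number: str
        material: Optional[MaterialMaster] = None
        serial_number: Optional[str] = None
        service_orders: list = field(default_factory=list)  # history per unit

    # A serialized product: each sold unit gets its own equipment master
    pump = MaterialMaster("MAT-100", "Centrifugal pump", serial_number_profile="Z001")
    unit = EquipmentMaster("EQ-1", material=pump, serial_number="SN-42")
    unit.service_orders.append("SRV-2010-001")  # recorded per unit, not per material

    # A stand-alone equipment master (never inventoried, e.g. plant machinery)
    press = EquipmentMaster("EQ-2")
    ```

    The point of the sketch is the optionality: service history hangs off the material/serial-number pair (the equipment master), and the material link can be absent entirely for stand-alone equipment.
    
    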
    Please check : http://www.sap-img.com/pm003.htm
    Regards,
    Syed Hussain.

  • Master and Slave concept

    hi guys,
    I am confused about the master and slave relationship in OSPF. Can anyone explain it to me in detail?

    Hello Pankaj,
    The Master and Slave in OSPF are a slightly confusing concept but the idea behind it is quite simple.
    When two routers decide to become fully adjacent, they must synchronize their LSDBs. OSPF tries to optimize this: the routers first exchange only the list of entries in their LSDBs. Each router compares the received list to the list of items in its own database, and if it finds that an LSA is missing or is older than one the neighbor knows about, it requests it from the neighbor afterwards. This way, both routers transmit only the missing or updated LSAs, not the entire LSDBs.
    The list of LSDB entries is carried in Database Description (DBD) packets. Naturally, when routers exchange DBD packets, they must be sure all of them have been properly received by the neighbor, so some sort of acknowledgements must be used. There is a problem here, however: the only packets in OSPF used to carry acknowledgements are LSAck packets, but they can only be used to acknowledge LSU packets (more precisely, individual LSAs carried in LSU packets), not DBD packets. How shall the acknowledgements of DBD packets be accomplished, then?
    OSPF uses a polling style of communication with the DBD packets. DBD packets themselves have sequence numbers used for sequencing and acknowledgement purposes. One of the two routers that are in the synchronization phase will be the one responsible for polling the other (i.e. calling on it to send another piece of information if it has any), each time with an incremented sequence number. This is the Master role. The other router will only be allowed to respond to a DBD poll, never send any DBD packet without being polled immediately before, and the response DBD packet must carry the sequence number of the Master's DBD poll packet. This is the Slave role. The Slave must respond to each Master's DBD packet even if it has no more LSDB entries to advertise; in that case, the DBD response body will be empty.
    So during the DBD exchange, the Master sends DBD packets to Slave, incrementing the sequence number by one in each round. The Slave waits for DBD packets from the Master and only responds to them, and each response carries the sequence number from the last received Master's DBD packet that was used to poll the Slave. Remember: a Slave must not send DBD packets on its own, only as responses to DBD packets received from the Master, and the sequence number of the Slave's response DBD packet must be set to the Master's poll DBD packet.
    While I call the DBD packets as "polls" and "responses" here for the sake of clarity, the DBD packets do not have this distinction indicated explicitly. Any DBD packet sent from the Master, either with a body carrying a list of LSAs or an empty body, is a poll. Any DBD packet sent from the Slave, again either with a non-empty or empty body, is a response. A DBD packet can have an empty body if the router needs to send a DBD packet to the neighbor (either from Master to repeatedly poll the Slave, or from the Slave to confirm the arrival of the DBD packet from the Master) but has no more LSDB entries to advertise itself.
    There are two issues with this simple procedure. First, there is the issue of who out of two synchronizing routers will be the Master and who will be the Slave. This is resolved during the ExStart phase: both routers initially treat themselves as Master routers, and send DBD packets with random initial sequence numbers to each other, indicating the MS flag (Master) in their header. As they do this, the router with the lower RouterID moves to the Slave role, while the router with the higher RouterID remains in the Master role. The ExStart phase is basically finished after establishing the router's role in the synchronizing pair; at most two DBD packets are needed for that, one from each router. The Exchange phase then lasts until routers have exchanged the entire list of their LSDB entries using DBD packets.
    The second issue is more subtle: how should the Master know whether the Slave needs to be polled further? Clearly, a situation may arise when the Master's LSDB is empty or smaller than the Slave's, and the Master will need to send fewer DBDs than the Slave to list all its LSDB contents. As the Slave can not send a DBD packet on its own, it somehow needs to tell the Master to poll it again. This is accomplished by another flag in the DBD packet header, the M (More) flag. If a Slave replies to the Master's DBD packet with its own DBD packet and the M flag set, the Master knows the Slave needs to be polled again. The Master will stop polling the Slave after the last DBD packet from the Slave has the M flag cleared.
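    The polling procedure described above can be sketched as a toy simulation (this is not real OSPF code; packets are reduced to a sender, a sequence number, the M flag, and a chunk of LSA headers):

    ```python
    def dbd_exchange(master_lsas, slave_lsas, start_seq=1000):
        """Toy model of the OSPF Exchange phase: the Master polls with an
        incrementing sequence number; the Slave only ever echoes that number.
        Returns the transcript as (sender, seq, more_flag, payload) tuples."""
        transcript = []
        m, s = list(master_lsas), list(slave_lsas)
        seq = start_seq
        while True:
            m_payload, m = m[:1], m[1:]        # Master advertises its next chunk
            m_more = bool(m)                   # M flag: Master has more to send
            transcript.append(("Master", seq, m_more, m_payload))
            s_payload, s = s[:1], s[1:]        # Slave may only respond, echoing seq
            s_more = bool(s)                   # M flag: Slave asks to be polled again
            transcript.append(("Slave", seq, s_more, s_payload))
            if not m_more and not s_more:      # both sides done: exchange finished
                return transcript
            seq += 1                           # next poll, incremented sequence number

    # Master has 1 LSA header to list, Slave has 3: the Master keeps sending
    # empty-body polls just to keep the Slave talking.
    log = dbd_exchange(["LSA-A"], ["LSA-X", "LSA-Y", "LSA-Z"])
    for sender, seq, more, payload in log:
        print(sender, seq, "M" if more else "-", payload)
    ```

    The empty Master polls at seq 1001 and 1002 mirror exactly the case the text describes: the Master's own LSDB listing is exhausted, but it must keep polling because the Slave's responses still carry the M flag.
    
    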
    The RFC 2328 has a nice ASCIIart graph of the adjacency coming up:
                +---+                                         +---+
                |RT1|                                         |RT2|
                +---+                                         +---+
                Down                                          Down
                                Hello(DR=0,seen=0)
                           ------------------------------>
                             Hello (DR=RT2,seen=RT1,...)      Init
                           <------------------------------
                ExStart        D-D (Seq=x,I,M,Master)
                           ------------------------------>
                               D-D (Seq=y,I,M,Master)         ExStart
                           <------------------------------
                Exchange       D-D (Seq=y,M,Slave)
                           ------------------------------>
                               D-D (Seq=y+1,M,Master)         Exchange
                           <------------------------------
                               D-D (Seq=y+1,M,Slave)
                           ------------------------------>
                               D-D (Seq=y+n, Master)
                           <------------------------------
                               D-D (Seq=y+n, Slave)
                 Loading   ------------------------------>
                                     LS Request                Full
                           ------------------------------>
                                     LS Update
                           <------------------------------
                                     LS Request
                           ------------------------------>
                                     LS Update
                           <------------------------------
                 Full
    The I flag here is another flag in DBD headers called the Init flag, and is set only on initial DBD packets in the ExStart phase. If the router has established its Master or Slave role, it clears the I flag. This one is not really that important right now.
    The Master/Slave relationship is built and relevant only during the initial LSDB synchronization when a new adjacency is being established. After the two routers go past the Exchange state, DBD packets are not used anymore, and the whole Master/Slave relationship is forgotten. Remember: Master/Slave is relevant only to DBD packets, and DBD packets are used only in ExStart/Exchange phases. Outside of these states, there are no DBD packets used, hence no Master/Slave relationships exist.
    If there are, say, four routers, R1 through R4, connected to the same switch and running OSPF, then during OSPF bootup there will be 5 temporary Master/Slave relationships built and torn down afterwards:
    between the DR and BDR as they synchronize (assume those routers are R1 and R2)
    between R3 and DR
    between R3 and BDR
    between R4 and DR
    between R4 and BDR
    Notice that the Master/Slave relationships existed between those routers that went through ExStart and Exchange into the Full state. Also keep in mind that in the Full state, there are no more Master/Slave relationships present - they were only needed because of the specific needs of the DBD packet exchange.
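    The count of five relationships generalizes: on a broadcast segment with n routers, full adjacencies (and hence temporary Master/Slave pairs) form only between the DR/BDR and everyone else, giving 2n - 3 pairs. A quick sketch (router names are arbitrary):

    ```python
    def masterslave_pairs(routers, dr, bdr):
        """Enumerate the adjacency pairs that go through ExStart/Exchange
        (and thus a temporary Master/Slave relationship) on one segment."""
        pairs = [(dr, bdr)]                    # DR and BDR synchronize with each other
        for r in routers:
            if r not in (dr, bdr):
                pairs.append((r, dr))          # every DROther syncs with the DR...
                pairs.append((r, bdr))         # ...and with the BDR
        return pairs

    segment = ["R1", "R2", "R3", "R4"]
    pairs = masterslave_pairs(segment, dr="R1", bdr="R2")
    print(len(pairs), pairs)  # 5 pairs for 4 routers: 2*4 - 3 = 5
    ```

    DROther routers never synchronize with each other, which is why the count is 2n - 3 rather than n(n-1)/2.
    
    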
    Does this make the issue a little more clear? Please feel welcome to ask further!
    Best regards,
    Peter
