Conceptual questions

Hello experts,
Since I am new to MDM, there are a few doubts I would like to clear up. These apply to particular scenarios:
1) I have two different R/3 systems. Two records are present, one in each system; in the real world, both refer to the same data. They are merged in MDM, and some attributes are changed in both records. Now, what happens on syndication? E.g., if the names were different originally, what gets reflected back?
2) I have one R/3 system. It has two different records that are the same in the real world. These records are matched and merged in MDM. What happens on syndication? Will there be one record or two, and what will be the key field of this record (or records)?
3) I have one R/3 system. Suppose I send 100 MATMAS IDocs to MDM via XI.
a. Suppose I create 20 more MATMAS records. Can I send only these 20 at a later point in time? If so, how do I choose these 20?
b. Suppose I change some of the original 100 records in R/3 before MDM syndicates them back. Will there be an inconsistency when those old records are syndicated back? If not, how does SAP avoid it?

Kris,
When you are trying to merge two records from systems A and B, there is every chance that some attributes in the record from A will differ from those in the record from B. In such a case the concept of the "System of Record" (SOR) comes into play. Your data governance process might be set up such that attribute F1 from system A is always correct (A is the SOR for F1) and attribute F2 from system B is always correct (B is the SOR for F2). So, when you merge your records from the two systems, the value of F1 from the record from A "survives" and the value of F2 from the record from B "survives" in the merged record. In the situation you have described in 3b, your governance process may be set up such that R/3 is the SOR for classification data and MDM is the SOR for everything else. In that case, if there is any change to classification data in R/3, your process should be set up to overwrite any changes to the classification data in MDM with the R/3 data. If there is any change to any other type of data in R/3, you don't bring that over to MDM. You can set up XI to do a lot of this activity.
To cut a long story short, the easiest way to handle such cases is through an airtight governance process where you identify the SOR for every attribute and then don't let a value that came from the SOR be overwritten by data from any other system.
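To make the survivorship idea concrete, here is a minimal sketch in plain Java (not the MDM API; the attribute names and the SOR rule table are invented for illustration):

// Attribute-level survivorship: for each attribute, the value coming
// from its System of Record (SOR) wins in the merged record.
import java.util.Map;

public class SurvivorshipDemo {
    // Attribute -> system that is the SOR for it (a governance decision)
    static final Map<String, String> SOR = Map.of(
            "name", "A",       // system A owns the name
            "address", "B");   // system B owns the address

    static String survive(String attribute, String valueFromA, String valueFromB) {
        // Pick the value from the attribute's System of Record
        return "A".equals(SOR.get(attribute)) ? valueFromA : valueFromB;
    }

    public static void main(String[] args) {
        // Two records from A and B referring to the same real-world entity
        String name = survive("name", "ACME Corp.", "ACME Corporation");
        String address = survive("address", "1 Old Road", "1 New Road");
        System.out.println(name + " / " + address); // ACME Corp. / 1 New Road
    }
}

Under such a model, it would be the single merged record carrying the surviving values that flows back on syndication.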
Regarding #1, I am not very sure I understand the question. If MDM is simply being used for consolidation of your master data records, then your transactional data is still in the R/3 systems and you are not deleting any master data from those. Are you asking about things like changes to a "bill to" address in MDM and that change being syndicated?

Similar Messages

  • PI conceptual question

    Good afternoon:
    We are currently moving into SOA and we'd like to use NetWeaver as our ESB, but I have a conceptual question right now:
    - If my applications consume the web services provided or registered in the Services Registry, will I be using the Process Integrator implicitly??
    We want to register web services and use the web services registered in the Services Registry of NetWeaver, but licensing for PI is really expensive...
    Thanks for any hints...

    >  If my applications consume the web services provided or registered in the Services Registry, will I be using the Process Integrator implicitly??
    Yes. To consume or host a web service you can use PI as middleware. PI also provides a Services Registry where you can register your web services for others.
    >> We want to register web services and use the web services registered in the Services Registry of NetWeaver, but licensing for PI is really expensive...
    PI 7.3 has plenty of cool features and is positioned as SOA middleware. Comparatively, PI's licensing cost is better than competitors' too.

  • Conceptual questions with document management and Apex:

    Hello Everyone,
    I have reviewed or participated in thread discussions focusing primarily on subject matters concerning text editors, spellcheckers and document printing. The reason for this is that our client has requested the creation of a basic centralized document management system that will enable users to create, edit and print technical documents in a database-centric, web-based environment. The caveat is that the client would like the same basic functionality that users get from MS Word. I know about FCKeditor and TinyMCE and their associated spellcheckers. What concerns me is that I have not found a possible plug-in to handle tracking changes; no one wants to re-read a large multi-page document when all they would rather do is view the changes. I know there are database schemas that might facilitate this type of functionality; I am just hoping it is more of a plug-in function.
    So with all that being said, my dilemma is how to approach the design of such an application using Apex, if that is possible. Some questions I have are:
    1. Do you design the application with one text field that contains the entire document, which could be as many as 25 or more pages?
    2. Or do you break the document down into multiple text fields and then assimilate them into a single multi-page document when printing?
    3. Would you store the document data using XML under condition 1, 2, both or not at all?
    4. What types of data tables might exist, such as tables for document templates, work-in-process and final documents or something else?
    I know there are a lot of other concepts/questions to consider and a large part of the design approach would be based on client requirements. My goal here is to gather different basic conceptual approaches, from forum members, in order to help facilitate a starting point for the project.
    By the way, I have seen on the Apex Latest Forum Poll, for quite some time, that Document Management is an application people would like to see developed. Can anyone from the Apex team tell me if it is in the works and, if so, when?
    Thanks, in advance, for any suggestions.
    Kyle

    Hey Chet,
    Thanks for the response; actually I had visited the sample package apps site a while back and did not realize more had been added. My problem is that I use Apex 2.1 and not 2.2, so unless there is a way to load the package apps onto the Oracle-hosted site, I won't be able to review their design. It would be nice if Oracle tied these package apps to their demonstration applications sample downloads function in Apex.
    As for storing each line of the document in a single record, this was thought of as an initial approach. A concern for the team was how to program the logic to identify specific changed text in, say, a 5-sentence paragraph, and how large the table would become if recording it line by line.
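    For what it's worth, a minimal sketch of that kind of line-level comparison in plain Java (purely illustrative; a real schema would also carry version numbers and document keys):

    // Flag which stored lines changed between two versions of a
    // document held one line per record.
    import java.util.List;

    public class LineDiffDemo {
        public static void main(String[] args) {
            List<String> v1 = List.of("Intro paragraph.", "Terms apply.", "Sign here.");
            List<String> v2 = List.of("Intro paragraph.", "New terms apply.", "Sign here.");

            int max = Math.max(v1.size(), v2.size());
            for (int i = 0; i < max; i++) {
                String oldLine = i < v1.size() ? v1.get(i) : "";
                String newLine = i < v2.size() ? v2.get(i) : "";
                if (!oldLine.equals(newLine)) {
                    // A real app would highlight this row in the UI
                    System.out.println("Line " + (i + 1) + " changed: " + newLine);
                }
            }
        }
    }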
    It is still a good approach to consider and we appreciate the input.
    Thanks
    Kyle

  • Important conceptual question about Application Module, Maximum Pool Size

    Hello everyone,
    We have a critical question about the Application Module default settings (taking the DB connections from a DataSource)
    I know that on the Web it is generally suggested that each request must end with either a commit or a rollback when executing PL/SQL blocks "directly" on the DB, without the framework BC/ViewObject/Entity service intervention.
    Now, for some reasons, we started to develop our applications thinking that each Web session would reference exactly one DB session (opened by any instance taken from the AM pool) for the whole duration of the session, so that the changes made by each Web session to its DB session would never interfere with the changes made by "other" Web sessions to "other" DB sessions.
    In other words, because of that assumption we often implemented a sort of "transaction" that opens and closes (with either commit or rollback) each DB session not in/after a single HTTP request, but across many HTTP requests.
    As a concrete example think of this scenario:
    1. the user presses the "Insert" button. An HTTP request is fired. The action listener is executed and ends up inserting rows into a table via a PL/SQL block (not via the ViewObjects API).
    2. no commit or rollback is issued yet after the above PL/SQL block.
    3. finally, the user presses a "Commit" or "Rollback" button, firing the call to the appropriate AM method.
    Those three requests constitute what I called a "transaction".
    From the documentation it's clear that there is no guarantee that the AM instance + DB session pair stays the same during all the requests.
    This means that, during step 2, it's possible that another user might reference the same "pending" AM/DB session for his needs and somehow "steal" the work done via PL/SQL after step 1. (This happens because sessions taken from the pool are always rolled back by default.)
    Now my question is:
    Suppose we set the "Maximum Pool Size" parameter to a very large number (always smaller than the maximum number of concurrent users):
    Is there any guarantee that all the requests will be isolated in that case?
    I hope the problem is clear.
    Let me know if you want more details.

    Thanks for the answers.
    If I am right, from all your answers about resource availability, this means that even supposing the framework is always able to give us the same AM instance back from the AM pool (by following the session-affinity criteria), there is, however, no "connection affinity" with the connections from the DataSource. This means that the "same AM instance" might take a new DB connection, if necessary, from the connection pool of the DataSource. If that happens, it could give us the same problems as taking a new AM instance (that is, not following session affinity) from the beginning: each time a new connection is taken (either via a new AM instance or via the same AM instance plus a new DB connection), the corresponding DB session is rolled back by default, clearing all the pending transactions we might have performed before with direct PL/SQL calls that bypass the AM services, so that the new HTTP request has a clean DB session to start working with.
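    The underlying mechanics can be shown with plain JDBC, independent of ADF (a minimal sketch; the pool's check-in behavior is simulated here by an explicit rollback, which is what pools typically do when a connection is returned; the URL and credentials are placeholders):

    // Uncommitted work on a pooled connection is lost when the pool
    // rolls the connection back on check-in between HTTP requests.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PoolRollbackDemo {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pw")) {
                con.setAutoCommit(false);

                // Request 1: insert via direct SQL, no commit yet
                try (Statement st = con.createStatement()) {
                    st.executeUpdate("INSERT INTO t (id) VALUES (1)");
                }

                // Between requests the pool reclaims the connection and
                // rolls it back -- the insert above silently disappears.
                con.rollback();

                // Request 3 (the "Commit" button) now has nothing to commit.
                con.commit();
            }
        }
    }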

  • ALE Configuration Conceptual Questions

    Hi Experts,
    I need some help regarding the ALE configurations.
    1. I know that we specify the RFC destination in the port and then specify the port in the partner profile. What exactly is the significance of a port in an outbound scenario? We could have directly specified the RFC destination in the partner profile.
    2. When we create the TCP/IP RFC destination, it creates a tRFC connection. However, in the special options tab we have the option to select the qRFC version. What is the use of that?
    3. Why is the port created for ALE a tRFC port? Is it because the RFC connection is TCP/IP that we need a tRFC port?
    4. Why do we need to create a TCP/IP RFC destination for ALE, and why not an HTTP RFC destination?
    5. Is the distribution model mandatory for all ALE scenarios? If not, when is it mandatory?
    6. While creating a process code we have the option "process with/without ALE service". What does that mean exactly?
    Thanks
    Kumar
    Moderator message: please search for available information/documentation, do not ask interview-type questions.
    Edited by: Thomas Zloch on Apr 19, 2011 7:01 PM

    hi,
    ALE/ IDOC
    http://help.sap.com/saphelp_erp2004/helpdata/en/dc/6b835943d711d1893e0000e8323c4f/content.htm
    http://www.sapgenie.com/sapgenie/docs/ale_scenario_development_procedure.doc
    http://edocs.bea.com/elink/adapter/r3/userhtm/ale.htm#1008419
    http://www.netweaverguru.com/EDI/HTML/IDocBook.htm
    http://www.sapgenie.com/sapedi/index.htm
    http://www.sappoint.com/abap/ale.pdf
    http://www.sappoint.com/abap/ale2.pdf
    http://www.sapgenie.com/sapedi/idoc_abap.htm
    http://help.sap.com/saphelp_erp2005/helpdata/en/0b/2a60bb507d11d18ee90000e8366fc2/frameset.htm
    http://help.sap.com/saphelp_erp2005/helpdata/en/78/217da751ce11d189570000e829fbbd/frameset.htm
    http://www.allsaplinks.com/idoc_sample.html
    http://www.sappoint.com/abap.html
    Regards,
    S.Nehru

  • CTM planning related conceptual questions

    In an attempt to understand the CTM functionality for master planning, I have the following questions:
    1. In real implementations, where is safety stock planning done by the standard methods, and where are service levels considered?
    2. When is lot-for-lot / fixed lot / reorder point used, and when planning by periods?
    3. When is a planning mode strategy like "replan all orders" or "orders with fixed pegging" used? Is it based on the replenishment lead times of the products?
    4. When is "delete only the unfirmed orders" used?
    I understand that this is industry-specific, but an explanation with reference to an industry would be helpful. The idea is to understand the business logic behind this configuration.

    Typically, service-level-based safety stock planning would be used in situations where replenishment lead times are variable; there could be SLAs with the supplier as well. Standard safety stock planning would be used under relatively static lead-time conditions. One would use days of cover in situations where net requirements vary a lot over time. One could also use a quantity-based static safety stock where the product has high-volume net requirements, and time-phased safety stock quantity / days of supply where demand and supply conditions are dynamic throughout the year.
    Regenerative planning does the planning from scratch by getting rid of all unconfirmed and confirmed planned orders/requisitions. Non-regenerative planning does incremental supply planning based on increased/decreased demand.
    Lot-for-lot is used when the planned order quantities have no size restrictions, so they may go from a certain minimum lot size to a maximum lot size; there is just one planned order to meet the demand. With a fixed lot size, there can be multiple planned orders of the same size to meet the demand. The reorder point method means that planned orders get created when stock plus scheduled receipts fall short of an expected level of projected stock; a rough sketch of that check follows.
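    As a rough illustration of the reorder point logic (plain Java with invented numbers, not CTM configuration):

    // Create planned orders when projected stock (on hand plus
    // scheduled receipts) falls below the reorder point.
    public class ReorderPointDemo {
        public static void main(String[] args) {
            double onHand = 120;
            double scheduledReceipts = 30;
            double reorderPoint = 200;
            double fixedLotSize = 100;

            double projected = onHand + scheduledReceipts;
            if (projected < reorderPoint) {
                // With a fixed lot size, several identical orders may be
                // needed to get back above the reorder point.
                int orders = (int) Math.ceil((reorderPoint - projected) / fixedLotSize);
                System.out.println("Create " + orders + " planned order(s) of " + fixedLotSize);
            }
        }
    }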

  • Hi some conceptual questions

    Hi all,
    I am new to this forum, a PL/SQL developer with some knowledge of Oracle's storage structures. Now I want to learn the DBA side, including backup etc.
    My questions are:
    1. What are the nomount and mount states of a database?
    2. What is RMAN, and why do we use it when we can recover a database through scripts?
    3. What is archivelog mode?
    4. What are cold backup and hot backup?
    5. How is recovery done when the control file is lost or the database crashes?
    Please help me.

    Wow............
    Such simple questions you asked.
    You say you are new to PL/SQL development, yet you are asking about the core administration part.
    If you want to know all of this, please go through the admin and backup/recovery books.
    Don't ask these questions before searching and reading for yourself.

  • Primavera - Conceptual Questions

    Hi,
    I am planning the environment and deployment strategy for Primavera EPPM and have the following questions:
    1. Coming from a Siebel CRM background, there is a concept of customisation (application code) and reference data. Does the same exist for Primavera?
    2. What is the process for pushing configuration or code updates made against one instance of Primavera EPPM from the development environment to another environment (such as a testing environment)?
    Any response is appreciated, even just a pointer to the correct documentation. I have already glanced over the Primavera P6 Documentation Centre.
    Thanks.
    ML


  • Conceptual questions on the SDK: Dev Best Practices

    Hello ByD Community,
    As I am not a master of development, I have questions about many things regarding how I should do my development in the SDK.
    For my first question I will take an example:
    BO Extensions:
    - Imagine you have already extended one of your Business Objects (Purchase Order) with error messages, a new field on one screen, etc., for a specific solution.
    - Now you need to do another piece of work for the same customer in the same Business Object.
    * What should we do?
              - Create a new customer-specific solution with a new extension of the Purchase Order BO? Or would this create issues between the two extensions? Actually, is it possible to create as many BO extensions as we want for the same BO in the same tenant?
              - Or should we go into the existing BO extension, do what we need there, and add it to the scripts already written for the first solution?
    Second issue:
    Where will the changes apply when we extend a Business Object that appears on various screens?
    My second issue is that I have trouble understanding how all the screens in ByD are separated according to the BO in the Repository Explorer. Finding the screens in the UI Designer is easy, but I sometimes have trouble understanding how the repository works.
    - Imagine you want to make a field mandatory on a specific screen, the Product Category ID in Purchase Order for example. So you go and write your scripts in the Purchase Order extension BO you created, and you choose the ProductCategoryID which is in the PurchaseOrder BO from the Repository Explorer.
    - How can we be sure that the ProductCategoryID becomes mandatory only on the Purchase Order creation screen and not on other screens, given that we never specify which screen we want this message for? This is the concept I have the most trouble getting into. How does my business logic developed in the SDK affect only some screens and not others?
    - Finally, can we assume that this PurchaseOrder.Item.ProductCategoryID (not the real path, just an example) is a different element than PurchaseRequest.Item.ProductCategoryID? So if we make changes to PO.ProductCategoryID, it won't affect PR.ProductCategoryID.
    Thank you very much for your help and for your guidance to improve my understanding.
    Best regards!
    Jacques-Antoine

    Hello Jacques-Antoine,
    Your first question: "another work for the same customer in the same Business Object"
    If this belongs to the same project, it should be part of the same solution.
    If this is a new project, then create a new solution.
    You can create as many solutions as you want and extend the same BO again and again in these solutions.
    Second question: "extend a Business Object contained in various screens"
    If you extend the BO, the extension is valid in all screens where this BO is used.
    So setting an element as mandatory will be applied to all UIs.
    If you want to have this only for a specific UI, you must enhance that UI.
    HTH,
       Horst

  • OID External Authentication Plugin - Conceptual question

    Hi-
    Does anyone know the answer to this: If I enable the External Authentication Plugin for OID (to AD) does that mean that if I have any accounts in OID which do not exist in AD, they won't be able to authenticate?
    Also, if anyone knows of some conceptual documentation on this, please let me know. All I could find was how to install it, but not how it works. (do I need to match users on CN or uid or what?)
    Thanks

    Hi,
    Once you have successfully synchronized the user accounts from eDirectory to OID using the dipassistant tool, the OID eDirectory external authentication plug-in (the oidspediri.sh file located under <ORACLE_HOME>/ldap/admin) is used to handle the passwords of those synchronized accounts against eDirectory. Provide the necessary eDirectory details there.
    Regards,
    ABP

  • Conceptual Question-Standby Database

    Hi all,
    I was just going through this documentation
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/log_apply.htm
    In the beginning it says
    "Log apply services automatically apply redo to standby databases to maintain synchronization with the primary database and allow transactionally consistent access to the data"
    I want to know: how does the process take the redo log and apply it to the datafiles? What is basically happening inside?
    Cheers,
    Neerav

    Dear Neerav999,
    If I knew your Oracle database version I could give you a more detailed explanation, but I will give one anyway.
    There are options that you can use for Oracle Data Guard. Data Guard is used for replicating the primary database to the standby database via the archived logs and/or the redo logs. A standby redo log is not mandatory for the maximum performance protection level, but you do need standby redo logs for the maximum availability and maximum protection levels of the Oracle database.
    Starting with 11g, you can cancel the recovery process on the standby site, open the standby database read-only, and then (magic goes here) start the media recovery! The command list for 11g is as follows:
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    --> This command cancels the MRP0 process and stops the continuous media recovery.
    ALTER DATABASE OPEN READ ONLY;
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE [USING CURRENT LOGFILE] DISCONNECT FROM SESSION;
    --> The above command starts the media recovery process ON THE READ-ONLY DATABASE! You can give it a try by creating a dummy table on the primary, waiting for one or two log switches, and checking back on the standby database. You will see that the table has been created there as well.
    So, the answer to your eternal question! :)
    The MRP process is there to fetch and request the archived logs from the primary site. This process can be started with the command below:
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    You may of course run a simple test: cancel the MRP process and switch some logfiles on the primary site, then rerun the above command to restart the MRP process and check the v$dataguard_status fixed view. You will see that MRP tries to fetch the missing archivelog files and close the gap between the primary and the standby database. Here, let me show you another 11g feature of Data Guard.
    In 11g you can compress the redo logs! Why would you need to compress them? Perhaps the most threatening problem with Data Guard is the connection itself: it should be fast and reliable. Otherwise the MRP process will fall behind, and if you are running in maximum performance mode it will also decrease the performance of the primary database and will hardly close the gap. What 11g Data Guard does is compress the log files during the gap-fetching period; once fetching is done and the gap is closed, normal transmission without compression resumes. Here is an example of the redo compression parameter:
    alter system set log_archive_dest_n = 'service=ORCLSTBY
    LGWR ASYNC valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE)
    db_unique_name=ORCLSTBY compression=enable';
    I have checked the link you gave, and it is for Oracle 10g, but I wrote the above to point out some real differences between 10g Data Guard and 11g Data Guard. There are other changes too; if you go to the 11g New Features site you will see Arup Nanda's posts about them.
    If you use standby redo logs, this is called "Real-Time Apply", and you can see it in the v$dataguard_status fixed view. Here is the information from the online documentation:
    6.2.1 Using Real-Time Apply to Apply Redo Data Immediately
    If the real-time apply feature is enabled, log apply services can apply redo data as it is received, without waiting for the current standby redo log file to be archived. This results in faster switchover and failover times because the standby redo log files have been applied already to the standby database by the time the failover or switchover begins.
    Use the ALTER DATABASE statement to enable the real-time apply feature, as follows:
        *For physical standby databases, issue the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE statement.
        *For logical standby databases, issue the ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE statement.
    Standby redo log files are required to use real-time apply.
    Here is a fixed view in which you can see the processes:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_1169.htm#REFRN30144
    PROCESS      VARCHAR2(9)      Type of process whose information is being reported:
        *RFS - Remote file server
        *MRP0 - Detached recovery server process
        *MR(fg) - Foreground recovery session
        *ARCH - Archiver process
        *FGRD
        *LGWR
        *RFS(FAL)
        *RFS(NEXP)
        *LNS - Network server process
    All of the above documentation is quite verbose; the answer to your question resides there, but you need to read it carefully to understand how Oracle applies log files.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/manage_ps.htm#i1009613
    Regards.
    Ogan

  • OIM - Conceptual question

    Hi All
    I have some confusion about parent tables and child tables with respect to resource provisioning. Say, for example, I am using the AD connector: I will have a main parent table for the AD RO and a child table for AD groups. When I provision a user, I populate the parent table data on the process form and then select a particular group and attach it to the parent table; the user then gets provisioned to AD with that group. This triggers two process tasks, as below:
    1. Create AD user
    2. Add to AD group
    I want to know how OIM knows when it has to call the second task, and whether it has to call the second task at all. Where in OIM can I see this linkage?
    Is it something like: if a row is populated in the child table, it automatically calls the second task? Is there any way I can see this linkage, or is it internal to OIM?
    Please let me know if anyone has idea about this.
    Thanks

    I guess you have already answered your own question.
    If you open your process task "Add to AD group" and look for something like Child Table and Trigger Type (lower left corner): whenever you add child data from the web app, a row is inserted in the child table, and this task has a mapping saying "whenever an insert operation happens in this child table, invoke me". So it gets invoked.
    The "Create AD user" task is called because it is the only non-conditional task in the process definition; all non-conditional tasks are invoked.
    Thanks
    Suren

  • BI Conceptual Questions

    Hi ,
    I have a few simple queries...
    Q1. A client manager wants notification via e-mail with a single file attached that shows sales variance greater than 15% by customer by region. Which of the following four options can I use, and why?
    1) Independent HTML file, particularly for e-mail
    2) Filtering using a control query
    3) HTML with separate MIME files but no zip file
    4) Only precalculate the lowest alert level
    Q2. InfoProvider data can be linked to many cells on the report output (True/False).
    Q3. Other than data bursting, is there any other way to send a message to non-SAP users from BI?
    Q4. The base price of a product should be used for additional calculations in reports. How should you map these reporting requirements in the data model? (Choose the valid options)
    1) The key figure "base price" must be included in the fact table of an InfoCube, otherwise no further calculations can be performed.
    2) The key figure "base price" can be stored as an attribute in the master data for the characteristic "product". In the query definition, the base price is made available for further calculations by a formula variable with replacement path.
    3) The key figure "base price" can be stored as a navigation attribute in the master data for the characteristic "product". Then it is available for reports.
    4) The key figure "base price" can be included in the fact table of an InfoCube, even if only the actual sales price and the product ID are available when the transaction data is loaded. The base price is read in the update rule or transformation from the master data that was loaded previously.
    Waiting for your valuable replies..
    Thanks,
    Baljindra!!

    Rakesh,
    Information on information broadcasting is available on help.sap.com, and I am sure you can try these options out in your system; a lot has also been discussed about information broadcasting already.
    Please search the forums before posting.

  • Web Service With Dynamic URL (Very Basic Conceptual Question)

    Hi everyone,
    I would like to employ JAX-WS to generate and publish a web service along with a web-based client which uses the service. The problem is: I want to deliver both the server (with its service) and the client to a customer, who will install the server on an internal machine and will have to configure the client to look for the web service at the IP of that internal machine, ideally by just putting the IP into some configuration setting. Both the service's path (on the server) and the services themselves remain constant.
    From everything I have found so far, it appears as if one fixes the service's IP during compilation, i.e., when generating the WSDL and stubs using wsgen/wsimport. I guess that's fine when the server stays with me at a fixed IP, but doesn't this approach break down as soon as you need some flexibility in the server's IP, as in my scenario above? I guess I am missing something, but unfortunately all the documentation I have found so far either neglects this issue or comes up with rather complex solutions indeed. Or would one not use SOAP+WSDL in the above scenario in the first place? Any other best practices?
    I'd be very grateful for any hints,
    Cheers
    equitone

    Hi,
    thanks for your reply. Of course, I agree I could alter the generated code, but, as you say, I would not want to do that, since it would make automatic builds and deployment rather complicated.
    I guess my expectations about the "flexibility"/"design" of some of the generated artifacts are just a bit skewed, and I'll have to find a compromise between my expectations and the Java 6 way of web services. For instance, I also find it somewhat annoying that wsgen will apparently only generate the service against an implementing class, not an interface. E.g., I have the following web service interface MyService and an implementing class MyDummyService:
    @WebService
    public interface MyService {
        @WebMethod
        public String getString();
    }

    @WebService(endpointInterface = "com....MyService")
    public class MyDummyService implements MyService {
        public String getString() {
            return "Just a dummy.";
        }
    }
    The service in the WSDL then specifies its port as "MyDummyServicePort", so if I ever change the implementation, without actually changing the (web service) interface, I will also have to regenerate all client stubs. I would have hoped that wsgen's "service" option would help in this regard, but apparently it doesn't. Unfortunately, the documentation on the whole issue is in a pitiful state, IMHO.
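    Incidentally, one thing that may address the original dynamic-IP question: JAX-WS allows overriding the endpoint address on the generated port proxy at runtime via BindingProvider, so the IP can come from a configuration setting. A minimal sketch (the generated class and method names below are placeholders):

    import javax.xml.ws.BindingProvider;

    public class DynamicEndpointClient {
        public static void main(String[] args) {
            String host = args[0]; // e.g. read from a config file instead

            // Generated service/port (names illustrative)
            MyDummyServiceService service = new MyDummyServiceService();
            MyService port = service.getMyDummyServicePort();

            // Point the proxy at the customer's server instead of the
            // address baked into the WSDL at build time.
            ((BindingProvider) port).getRequestContext().put(
                    BindingProvider.ENDPOINT_ADDRESS_PROPERTY,
                    "http://" + host + ":8080/services/MyService");

            System.out.println(port.getString());
        }
    }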
    Cheers
    Equitone

  • Some conceptual questions??

    1- What is the purpose of Java interfaces? Don't you think that we could build an application with only abstract classes? In that case we wouldn't require interfaces.
    2- If Java interfaces were not there, would a complex Java application be possible?
    3- Suppose there were no interfaces in Java: which functionality in particular would be missing?
    4- Why abstract classes? As we cannot instantiate abstract classes, what is the purpose behind all this?
    5- Is overloading a form of polymorphism? If yes, why? Because the concept of polymorphism is, I believe, only possible with inheritance.

    Hi Ali,
    1- What is the purpose of Java interfaces? Don't you think that we could build an application with only abstract classes? In that case we wouldn't require interfaces.
    Interfaces, in concept, are contracts between two entities: one entity says to the other, "if you want to speak to me, you must speak in this language", and the other entity then speaks in that language and chooses what words to say based on the situation.
    In the Java world this means one entity requires the other to implement a given interface, so it can be sure its contract (a set of methods) will not be violated, while the implementing entity is free in how it provides the implementation of those methods.
    4- Why abstract classes? As we cannot instantiate abstract classes, what is the purpose behind all this?
    Abstract classes are used for the same reason as interfaces (they are a contract), but with one more capability: an abstract class may provide concrete implementations for parts of the contract, so that the implementor (here, the extending class) only has to implement the rest.
    Abstract classes can't replace interfaces, though. Remember that Java has single inheritance: once you extend the abstract class you have used up your opportunity and can't extend another one. A small sketch contrasting the two follows.
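    A minimal, hypothetical example of the contract idea (all names invented):

    // An interface is a pure contract; an abstract class may already
    // implement part of it, leaving the rest to the extending class.
    interface Greeter {
        String greet(String name);
    }

    abstract class PoliteGreeter implements Greeter {
        public String greet(String name) {
            return prefix() + ", " + name + "!";
        }
        abstract String prefix(); // the only part the subclass must supply
    }

    class EnglishGreeter extends PoliteGreeter {
        String prefix() { return "Good day"; }
    }

    public class ContractDemo {
        public static void main(String[] args) {
            Greeter g = new EnglishGreeter(); // callers depend only on the contract
            System.out.println(g.greet("Ali")); // Good day, Ali!
        }
    }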
    5- Is overloading a form of polymorphism? If yes, why? Because the concept of polymorphism is, I believe, only possible with inheritance.
    Yes, it is: polymorphism means you have different METHODS to perform the same FUNCTION.
    So assume you have a function named getPrice that can return the price in USD or GBP; then you need two overloaded methods, one that takes no args and returns USD, and one that takes a string arg and returns the price in the given currency, as sketched below.
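    Putting that getPrice example into code (a minimal sketch; the prices are invented):

    // Overloading: same method name, different parameter lists.
    public class PriceDemo {
        // No-arg version: price in USD
        static double getPrice() {
            return 9.99;
        }

        // Overloaded version: price in the requested currency
        static double getPrice(String currency) {
            return "GBP".equals(currency) ? 7.99 : 9.99;
        }

        public static void main(String[] args) {
            System.out.println(getPrice());      // 9.99 (USD)
            System.out.println(getPrice("GBP")); // 7.99
        }
    }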
    Hope this helps you understand the concepts.
    Only by helping each other can we grow.
