Critical scenarios for interview purposes

Hi gurus,
Please send me some critical scenarios for interview purposes, with solutions.
Thanks for all the help in advance.
Regards
Reddy.

Hi Pv,
Send me your mail ID and I will forward the basic PDF doc.
Regards,
venkat

Similar Messages

  • Support issues for interview purposes

    Hi Experts,
    I am a fresher to FICO with 6 months of experience. Right now I am trying to find a job on the basis of FICO.
    Could you please send me support issues with solutions, because I need to present the same to the interview panel.
    Please send me details of support issues that would be good to impress an interview panel.
    Regards
    Vinay

    Hi Hassib
    Not sure what exactly you mean by "maximum number of sensors (application monitoring interfaces) does the ASA support"? There is no real limit on the number of interfaces that can be monitored using the IPS module. All you need to ensure is that you send the right kind of traffic to the IPS so it does not get overwhelmed with too much traffic. This link should give the maximum throughput supported:
    http://www.cisco.com/en/US/products/ps6120/prod_models_comparison.html
    Hope this helps!
    Thanks and Regards,
    Prapanch

  • Creating a VC Scenario for demo purpose

    Hello Gurus,
    Will anyone guide me through the step-by-step procedure to create a scenario in VC in a system to show the client? Guidance on creating a simple scenario would be of great help.
    Full points to the scenario steps.

    Hi Pv,
    Send me your mail ID and I will forward the basic PDF doc.
    Regards,
    venkat

  • Want Scenarios for BPM

    Hi All,
    Can anybody please provide some BPM scenarios, BPM docs, or any related docs for BPM?
    I want some scenarios for BPM.
    Thanks in advance,
    Shweta

    Hi Shwetha,
    BPM is used for stateful communication. Suppose you have to delay message processing, or wait for other messages to arrive and then send them all together; in that case, use BPM.
    We will use BPM whenever we want to do the following:
    1. Controlling or monitoring of messages in XI
    2. Collect or Merge the messages in XI
    3. Split the messages in XI
    4. Multicast a Message
    5. Need to send an Alert
    6. Transformation
    With its BPM capability, SAP NetWeaver:
    • Exploits business-process efficiency by giving your business users the ability to directly model, manage, monitor, and analyze business processes
    • Enables continuous process improvement and the dynamic modification of business processes
    • Extends the value of your company's core business investment and maximizes the return on its strategic assets by providing the ability to change process rules without additional IT investment
    • Provides greater visibility into critical business operations for better decision making by delivering the right information at the right time
    • Allows the integration of people, applications, and internal and external resources
    Process step types:
    Message relevant:
    Receive: We use it to receive a message. By receiving a message we bring the data into the process. We can use it to start a process and to activate or use correlations.
    Send: We use it to send either an asynchronous or synchronous message, or an acknowledgement.
    Receiver Determination: We use it to get a list of receivers for a subsequent send step. It calls the receiver determination that we configured in the Integration Directory and returns the list of receivers.
    Transformation: We use it to change a message inside the process, e.g. bundling multiple messages into one or splitting a message into multiple messages.
    Using this we can create N:1, 1:N, or 1:1 transformations. In a general scenario, a 1:N transformation is possible.
    Process flow control Relevant:
    Container: We use it to set a value for a target container element at runtime. The target container element and the assigned value must have the same data type.
    Control: We use it to terminate the current process, to trigger an exception, or to trigger an alert.
    While Loop: To repeat the execution of steps within the loop.
    Fork: We use it when we want to continue a process in branches that are independent of each other, e.g. to communicate with two systems that are independent of each other.
    Block: We use it to combine steps that we want to execute one after the other and which access the same local data.
    Empty: It has no influence on the process flow. We use it as a placeholder for a step that has not yet been defined, or as a step with no function for test purposes.
    Wait: We use it to incorporate a delay in the process.
    Switch: We use it to define different processing branches for a process
    T.Codes for B.P.M:
    SXMB_MONI_BPE
    SXWF_XI_SW11
    For example, here is a small explanation of a requirement for which we used BPM:
    A background program should be scheduled to run every 10 minutes to analyse any material master records that have been created, changed, or deleted in the last minute.
    There are two message mappings involved in the whole scenario. The first mapping is an N:1 mapping which is used in the BPM, and the second mapping is a 1:1 mapping:
    1. First message mapping, N:1 – a mapping between the IDoc (occurrence 0...unbounded in the "Messages" tab) and the IDoc with the occurrence of its top node (IDOC) changed to 0...unbounded. This message mapping is used in the BPM transformation step.
    Description: This BPM collects all IDocs of message type ZMATMAS05 for 10 minutes, grouped by receiver partner number (field RCVPRN), and calls the N:1 mapping to bundle the collected IDocs into one external definition for that IDoc.
    Use
    You use a wait step to incorporate a delay in a process. Usually, you use a delay to define when the next step in the process is to start. You can define a delay as either a point in time or a period of time.
    At runtime, the step waits until the specified point in time is reached or the specified period of time has passed. The system then continues the process by proceeding with the next step.
    This blog explains clearly how to do a file-to-file scenario with BPM:
    /people/krishna.moorthyp/blog/2005/06/09/walkthrough-with-bpm
    If it is File -> RFC -> File using BPM, then refer to this blog:
    /people/arpit.seth/blog/2005/06/27/rfc-scenario-using-bpm--starter-kit
    BPM-1 /people/krishna.moorthyp/blog/2005/06/09/walkthrough-with-bpm
    BPM-2 /people/krishna.moorthyp/blog/2006/04/08/reconciliation-of-messages-in-bpm
    BPM-3 /people/arpit.seth/blog/2005/06/27/rfc-scenario-using-bpm--starter-kit
    BPM-4 /people/michal.krawczyk2/blog/2005/06/11/xi-how-to-retrieve-messageid-from-a-bpm
    Integration Scenarios
    /people/venkat.donela/blog/2006/02/17/companion-guide-to-integration-scenario
    /people/siva.maranani/blog/2005/08/27/modeling-integration-scenario146s-in-xi
    Schedule BPM
    /people/siva.maranani/blog/2005/05/22/schedule-your-bpm
    Use of Synch - Asynch bridge in ccBPM
    /people/sriram.vasudevan3/blog/2005/01/11/demonstrating-use-of-synchronous-asynchronous-bridge-to-integrate-synchronous-and-asynchronous-systems-using-ccbpm-in-sap-xi
    Use of Synch - Asynch bridge in ccBPM
    https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/1403 [original link is broken]
    without BPM
    /people/henrique.pinto/blog/2007/08/02/syncasync-scenarios-without-bpm
    without BPM1
    /people/venkataramanan.parameswaran/blog/2007/01/18/syncasync-communication-in-jms-adapter-without-bpm-sp19
    IDOC BPM
    /people/pooja.pandey/blog/2005/07/27/idocs-multiple-types-collection-in-bpm
    To deal with multiple senders and receivers based on conditions, we could use BPM. It is one of the features of BPM, but it is not mandatory to go for BPM in each and every case; it depends upon the scenario.
    /people/marilyn.pratt/blog/2007/10/12/clubhouse-las-vegas-a-bpm-roadmap
    BPM Process Patterns:Repeatable Design for BPM Process Models
    http://www.bptrends.com/publicationfiles/05%2D06%2DWP%2DBPMProcessPatterns%2DAtwood1%2Epdf
    BPM Steps link : http://help.sap.com/search/highlightContent.jsp
    Regards,
    Vinod.

  • Sliding window scenario in PTF vs availability of recently loaded data in the staging table for reporting purposes

    Hello everybody, I am a SQL Server DBA and I am planning to implement table partitioning on some of our large tables in our data warehouse. I am thinking of designing it using the sliding window scenario. I do have one concern though: I think the staging tables we use for loading new data and for switching out the old partition are going to be non-partitioned, right? Well, I don't have an issue with the second staging table that is used for switching out the old partition. My concern is with the first staging table that we use for switch-in purposes: since this table is non-partitioned and holds the new data, how are we going to use/access this data for reporting purposes before we switch it into our target partitioned table? Say this staging table holds one month's worth of data and we will be switching it in at the end of the month. Correct me if I am wrong, but one way I can think of accessing this non-partitioned staging table is by creating views, since we don't want to change our code.
    Could you share your thoughts and experiences?
    We really appreciate your help.
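    One way to make the non-partitioned staging data visible for reporting before the switch, along the lines of the view idea mentioned above, is a UNION ALL view over both tables. A minimal T-SQL sketch, with purely hypothetical table and column names:
    -- Hypothetical names; adjust to your own schema.
    CREATE VIEW dbo.SalesForReporting
    AS
    SELECT SaleDate, CustomerID, Amount
    FROM dbo.SalesFact       -- partitioned table: older, already switched-in data
    UNION ALL
    SELECT SaleDate, CustomerID, Amount
    FROM dbo.SalesStaging;   -- non-partitioned staging table: current month
    Reports that select from this view keep working unchanged when the staging data is switched into the partitioned table at month end.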

    Hi BG516,
    According to your description, you need to implement table partitioning on some of your large tables in your data warehouse, and the requirement is that the partitioned table should only hold one month of data; please correct me if I have misunderstood anything.
    In this case, you can create a non-partitioned table and import the records that are older than one month into that new table, leaving the records that are less than one month old in the table in your data warehouse. Then you need to create a job to copy the data from the partitioned table into the non-partitioned table on the last day of each month. That way, the partitioned table only contains the data for the current month. Please refer to the links below for the details.
    http://blog.sqlauthority.com/2007/08/15/sql-server-insert-data-from-one-table-to-another-table-insert-into-select-select-into-table/
    https://msdn.microsoft.com/en-us/library/ms190268.aspx?f=255&MSPPError=-2147217396
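    A minimal sketch of the month-end copy job described above, again with assumed table and column names (an illustration of the INSERT INTO ... SELECT approach from the first link, not a tested script):
    -- Run from a SQL Server Agent job on the last day of each month.
    DECLARE @FirstOfMonth date = DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0);
    -- Move everything older than the current month into the non-partitioned history table
    INSERT INTO dbo.SalesHistory (SaleDate, CustomerID, Amount)
    SELECT SaleDate, CustomerID, Amount
    FROM dbo.SalesCurrent
    WHERE SaleDate < @FirstOfMonth;
    -- Remove the copied rows so the partitioned table keeps only the current month
    DELETE FROM dbo.SalesCurrent
    WHERE SaleDate < @FirstOfMonth;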
    If this is not what you want, please provide us with more information so that we can make a further analysis.
    Regards,
    Charlie Liao
    TechNet Community Support

  • MTO - For logistics purpose - Effect in COPA

    Hi,
    We have an MTO scenario maintained for logistics purposes. A production order is created with reference to a sales order, wherein two or more finished goods (FG) are fabricated. These FG are issued to the sales order as special stock, and consumption of the FG is posted at the time of such issue.
    The header material of the production order does not have any cost, neither MAP nor standard cost, as the inputs for this material are variable.
    Sales order is created with reference to this header material. Sales order costing is not done.
    Now we have to implement COPA. How can the COGS be captured for this sales order? The VPRS condition cannot find anything, nor can valuation, as there is no standard cost estimate.
    Please give your valuable suggestions.

    Hi,
    The given situation is just like another make-to-stock scenario, except that the products are tracked against the sales order.
    In this case, you must have a standard cost estimate done and released to the material master; without that, the process is not complete.
    In this case, only the VPRS condition type needs to carry the cost to COPA (nothing else).
    Please correct the understanding of the people involved about this scenario.
    Best Regards
    Surya

  • Scenario for EDI

    Hi, can anybody tell me how the concept of EDI works?
    1) Do configurations like setting up logical systems, partner profiles, etc. have to be made?
    2) Are there standard IDocs like MATMAS or CREMAS that are sent via EDI?
    3) Is there a step-by-step procedure for EDI?

    hi anjali,
    IDocs are basically a small number of records in ASCII format, building a logical
    entity. It makes sense to see an IDoc as a plain and simple ASCII text file, even if it
    might be transported via other means.
    Any IDoc consists of two sections:
    • the control record, which is always the first line of the file and provides the administrative information;
    • the data records, which contain the application-dependent data, as in our example below the material master data.
    We will discuss the exchange of the material master IDoc MATMAS in the
    paragraphs that follow..
    The definition of the IDoc structure MATMAS01 is deposited in the data dictionary
    and can be viewed with WE30.
    The very first record of an IDoc package is always a control record. The structure of this control record is the DDic structure EDIDC and describes the contents of the data contained in the package.
    The control record carries all the administrative information of the IDoc, such as its origin, its destination and a categorical description of the contents and context of the attached IDoc data. This is very much like the envelope or cover sheet that
    would accompany any paper document sent via postal mail.
    For R/3 inbound processing, the control record is used by the standard IDoc
    processing mechanism to determine the method for processing the IDoc. This
    method is usually a function module but may be a business object as well. The
    processing method can be fully customised.
    Once the IDoc data is handed over to a processing function module, you will no
    longer need the control record information. The function modules are aware of the
    individual structure of the IDoc type and the meaning of the data. In other words:
    for every context and syntax of an IDoc, you would write an individual function module or business object (note: a business object is also a function module in R/3) to deal with it.
    The control record has a fixed pre-defined structure, which is defined in the data
    dictionary as EDIDC and can be viewed with SE11 in the R/3 data dictionary. The
    header of our example will tell us, that the IDoc has been received from a sender
    with the name PROCLNT100 and sent to the system with the name DEVCLNT100 .
    It further tells us that the IDoc is to be interpreted according to the IDoc definition called MATMAS01 .
    All records in the IDocs, which come after the control record are the IDoc data. They are all structured alike, with a segment information part and a data part which is 1000 characters in length, filling the rest of the line.
    All records of an IDoc are structured the same way, regardless of their actual
    content. They are records with a fixed length segment info part to the left, which is
    followed by the segment data, which is always 1000 characters long.
    You can view the definition of any IDoc data structure directly within R/3 with transaction WE30.
    Regardless of the used IDoc type, all IDocs are stored in the same database tables
    EDID4 for release 4.x and EDID3 for release 2.x and 3.x. Both release formats are
    slightly different with respect to the lengths of some fields.
    Depending on the R/3 release, the IDoc data records are formatted according to either the DDic structure EDID3 or EDID4. The difference between the two structures mainly reflects the changes in the R/3 repository, which allow longer names starting from release 4.x.
    All IDoc data records have a segment info part and 1000 characters for data. The IDoc type definition can be edited with WE30. Data and segment info are stored in EDID4.
    All IDoc data records are exchanged in a fixed format, regardless of the segment type. The
    segment’s true structure is stored in R/3’s repository as a DDic structure of the same name.
    The segment info tells the IDoc processor how the current segment data is structured
    and should be interpreted. The only piece of information that is usually of interest is the name of the segment, EDID4-SEGNAM.
    The segment name corresponds to a data dictionary structure with the same name,
    which has been created automatically when defining the IDoc segment definition
    with transaction WE31 .
    For most applications, the remaining information in the segment info can be ignored as redundant. Some older, non-SAP-compliant partners may require it, e.g. the IDoc segment info also stores a unique segment number for systems that require numeric segment identification.
    To prepare a segment for processing in an ABAP program, it is usually wise to move the segment data into a structure that matches the segment definition.
    When R/3 processes an IDoc via the standard inbound or outbound mechanism, the IDoc is stored in the tables. The control record goes to table EDIDC and the data goes to table EDID4.
    All IDocs, whether sent or received, are stored in the table EDID4. The corresponding control record goes into EDIDC.
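    To make the storage model concrete, here is a hedged sketch of how the two tables relate. DOCNUM, SEGNUM, SEGNAM and SDATA are the standard key fields; the other field names are taken from common documentation, so verify them in your release (in practice you would browse these tables with SE16 rather than with raw SQL):
    -- Control record: one row per IDoc in EDIDC (the "envelope")
    SELECT DOCNUM, MESTYP, IDOCTYP, SNDPRN, RCVPRN, STATUS
      FROM EDIDC
     WHERE DOCNUM = '0000000000123456';   -- hypothetical IDoc number
    -- Data records: one row per segment in EDID4; SDATA holds the 1000-character data part
    SELECT DOCNUM, SEGNUM, SEGNAM, SDATA
      FROM EDID4
     WHERE DOCNUM = '0000000000123456'
     ORDER BY SEGNUM;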
    There are standard programs that read and write the data to and from the IDoc base.
    These programs and transaction are heavily dependent on the customising, where
    rules are defined which tell how the IDocs are to be processed.
    Of course, as IDocs are nothing more than structured ASCII data, you could always
    process them directly with an ABAP. This is certainly the quick and dirty solution,
    bypassing all the internal checks and processing mechanisms. We will not reinvent
    the wheel here.
    To do this customising setting, check with transaction WEDI and see the points,
    dealing with ports, partner profiles, and all under IDoc development.
    All inbound and outbound documents are stored in EDID4. Avoid reinventing the wheel: customising is done from the central menu WEDI; see the points dealing with ports, partner profiles, and everything under IDoc development.
    The declaration of valid combinations is done to allow validation, if the system can
    handle a certain combination.
    The combination of message type and IDoc type determines the processing algorithm. This is usually a function module with a well-defined interface or an SAP business object, and it is set up in table EDIFCT.
    The entry made here points to a function module which will be called when the IDoc
    is to be processed.
    The entries for message code and message function are usually left blank. They can
    be used to derive sub types of messages together with the partner profile used.
    Figure 25: Assign a handler function to a message/message type
    The definition for inbound and outbound IDocs is analogous. Of course, the function
    module will be different.
    R/3 uses the method of logical process codes to detach the IDoc processing and the
    processing function module. They assign a logical name to the function instead of specifying the physical function name.
    The IDoc functions are often used for a series of message type/IDoc type combinations. It is sometimes necessary to replace the processing function with a different one, e.g. when you make a copy of a standard function to avoid modifying the standard.
    The combination of message type and IDoc type determines the logical processing code, which itself points to a function. If the function changes, only the definition of the processing code has to be changed, and the new function is immediately effective for all IDocs associated with the process code.
    For inbound processing codes you have to specify the method to use for the
    determination of the inbound function.
    This is the option you would usually choose. It allows processing via the ALE
    scenarios.
    After defining the processing code, you have to assign it to one or several logical message types. This declaration is used to validate whether a message can be handled by the receiving system.
    The inbound processing code is assigned analogously. The processing code is a pointer to a function module which can handle the inbound request for the specified IDoc and message type.
    The definition of the processing code identifies the handler routine and assigns a series of processing options.
    You need to check "Processing with ALE" if your function can be used via the ALE engine. This is the option you would usually choose; it allows processing via the ALE scenarios.
    Associate a function module with a process code
    For inbound processing you need to indicate whether the function will be capable of
    dialog processing. This is meant for those functions which process the inbound data
    via call transaction. Those functions can be replayed in visible batch input mode to
    check why the processing might have failed.
    There are several common expressions and methods that you need to know when dealing with IDocs.
    The message type defines the semantic context of an IDoc. The message type tells
    the processing routines, how the message has to be interpreted.
    The same IDoc data can be sent with different message types, e.g. the same IDoc structure which is used for a purchase order can also be used for transmitting a sales order. Imagine the situation that you receive sales orders from your clients and, in addition, you receive copies of sales orders sent by a subsidiary of your company.
    An IDoc type defines the syntax of the IDoc data. It tells which segments are found
    in an IDoc and what fields the segments are made of.
    The processing code is a logical name that determines the processing routine. This
    points usually to a function module, but the processing routine can also be a
    workflow or an event.
    The use of a logical processing code makes it easy to modify the processing routine
    for a series of partner profiles at once.
    Every sender-receiver relationship needs a profile defined. This one determines
    • the processing code
    • the processing times and conditions
    and, in the case of outbound IDocs,
    • the media/port used to send the IDoc and
    • the triggers used to send the IDoc
    The IDoc partners are classified in logical groups. Up to release 4.5 there were the
    following standard partner types defined: LS, KU, LI.
    The logical system is meant to be a different computer and was primarily introduced
    for use with the ALE functionality. You would use a partner type of LS, when
    linking with a different computer system, e.g. a legacy or subsystem.
    The partner type customer (KU) is used in classical EDI transmission to designate a partner that requires a service from your company or is in the role of a debtor with respect to your company, e.g. the payer, sold-to party, or ship-to party.
    The partner type supplier (LI) is used in classical EDI transmission to designate a partner that delivers a service to your company. This is typically the supplier in a purchase order. In SD orders you also find LI-type partners, e.g. the shipping agent.
    Message Type – How to Know What the Data Means
    Data exchanged by an IDoc via EDI is known as message. Messages of the same kind belong to the same message type.
    The message type defines the semantic context of an IDoc. The message type tells
    the receiver how the message has to be interpreted.
    The term message is commonly used in communication, be it EDI or
    telecommunication. Any stream of data sent to a receiver with well-defined
    information in it is known as a message. EDIFACT, ANSI/X.12, XML and others
    use message the same way.
    Unfortunately, the term message is used in many contexts other than EDI as well.
    Even R/3 uses the word message for the internal communication between
    applications. While this is totally OK from the abstract point of view of data
    modelling, it may sometimes cause confusion if it is unclear whether we are
    referring to IDoc messages or internal messages.
    The specification of the message type along with the sent IDoc package is especially
    important when the physical IDoc type (the data structure of the IDoc file) is used
    for different purposes.
    A classical ambiguity arises in communication with customs via EDI. They usually
    set up a universal file format for an arbitrary kind of declaration, e.g. Intrastat,
    Extrastat, Export declarations, monthly reports etc. Depending on the message type,
    only applicable fields are filled with valid data. The message type tells the receiver which fields are of interest at all.
    Partner Profiles – How to Know the Format of the Partner
    Different partners may speak different languages. While the information remains the same, different receivers may require completely different file formats and communication protocols.
    This information is stored in a partner profile.
    In a partner profile you will specify the names of the partners which are allowed to
    exchange IDocs with your system. For each partner you have to list the message types
    that the partner may send.
    For any such message type, the profile tells the IDoc type, which the partner expects
    for that kind of message.
    For outbound processing, the partner profile also sets the media to transport the data to its receiver, e.g.
    • an operating system file
    • automated FTP
    • XML or EDIFACT transmission via a broker/converter
    • internet
    • direct remote function call
    The means of transport depends on the receiving partner, the IDoc type and message
    type (context).
    Therefore, you may choose to send the same data as a file to your vendor and via
    FTP to your remote plant.
    Also you may decide to exchange purchase data with a vendor via FTP but send
    payment notes to the same vendor in a file.
    For inbound processing, the partner profile customising will also determine a processing code which can handle the received data.
    IDoc Type – The Structure of the IDoc File
    The IDoc type is the name of the data structure used to describe the file format of a specific IDoc.
    An IDoc is a segmented data file. It typically has several segments. The segments are usually structured into fields; however, different segments use different fields.
    The IDoc type is defined with transaction WE30; the respective segments are defined with transaction WE31.
    Processing Codes
    The processing code is a pointer to an algorithm to process an IDoc. It is used to allow more flexibility in assigning the processing function to an IDoc message.
    The processing code is a logical name for the algorithm used to process the IDoc.
    The processing code points itself to a method or function, which is capable of
    processing the IDoc data.
    A processing code can point to an SAP predefined or a self-written business object
    or function module as long as they comply with certain interface standards.
    The processing codes allow you to easily change the processing algorithm. Because
    the process code can be used for more than one partner profile, the algorithm can be
    easily changed for every concerned IDoc.
    The IDoc engine will call a function module or a business object which is expected
    to perform the application processing for the received IDoc data. The function
    module must provide exactly the interface parameters which are needed to call it
    from the IDoc engine.
    In addition to the writing of the processing function modules, IDoc development requires the
    definition of the segment structures and a series of customising settings to control the flow of the IDoc engine.
    In summary:
    • Customise basic installation parameters
    • Define segment structures
    • Define message types and processing codes
    Segments define the structure of the records in an IDoc. They are defined with transaction WE31.
    Check first whether the client you are working with already has a logical system name assigned.
    The logical system name is stored in table T000 as T000-LOGSYS. This is the table
    of installed clients.
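    A quick way to check is to look at T000 (for example with SE16); a hedged SQL sketch, assuming the standard client key field MANDT:
    -- Lists the installed clients and their logical system names
    SELECT MANDT, LOGSYS
      FROM T000;
    If LOGSYS is empty for your client, no logical system name has been assigned yet.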
    If there is no name defined, you need to create a logical system name. This means
    simply adding a line to table TBDLS. You can edit the table directly or access the
    table from transaction SALE.
    The recommended naming convention is
    sysid + "CLNT" + client
    If your system is DEV and client is 100, then the logical system name should be:
    DEVCLNT100.
    System PRO with client 123 would be PROCLNT123 etc.
    The logical system also needs to be defined as a target within the R/3 network.
    Those definitions are done with transaction SM59 and are usually part of the work of the R/3 Basis team.
    See this link for all info on IDocs:
    http://abapprogramming.blogspot.com/2007/11/abap-idocs-basic-tools.html
    thanks
    karthik
    reward me points if useful

  • Calendar malfunction - calendar keeps freezing and displaying text I copied for another purpose - any advice?

    My 6-month old macbook air's calendar keeps freezing up.  when I open calendar, it displays across the top of the calendar some text I copied for another purpose and which I pasted into a word document yesterday.  When this happens there's nothing I can do to escape it -- all commands result in the "thud" tone.  So I keep having to force quit calendar. 
    Any advice?

    Hi Dylan2412,
    According to your description, you'd like to pass the ComboBox.Text to the second form's TextBox.
    In Form1:
    private void button1_Click(object sender, EventArgs e)
    {
        // Pass the selected text of comboBox1 to the second form
        Form2 f2 = new Form2(this.comboBox1.Text);
        f2.Show();
    }
    In Form2:
    public Form2(string s) // receive the string
    {
        InitializeComponent();
        this.textBox1.Text = s; // display it in the TextBox
    }
    You can get the ComboBox value across forms by following the sample above.
    If you have any other concern regarding this issue, please feel free to let me know.
    Best regards,
    Youjun Tang

  • O365 federation for testing purposes

    hello,
    Could I set up a federated trust from my on-premises infrastructure to O365 (using ADFS and DirSync and a locally issued certificate) just for testing purposes, and after the test period just break the trust? Could that have any consequences, because in a month or two I would like to migrate to O365?
    regards,
    Mario

    You can absolutely set up Exchange Online and treat it as a separate organization, create a federated trust and run through testing.  It is important to remember that "separate organization" means not using your existing SMTP domain, so you'll likely
    want to avoid ADFS/DirSync for the test period in this scenario.
    An alternative to consider would be setting up Office 365 in a hybrid coexistence configuration, requiring ADFS and DirSync, but putting Exchange Online in a scenario where it operates in tandem with your local Exchange organization.  This coexistence
    would allow you to test with your existing, production SMTP domain and a single GAL view, while still allowing you to break off permanently without impact to your production Exchange environment if you choose to do so.

  • Need an approach regarding reporting for a one-to-many primary-child relationship, where there are more than three child objects for a primary object for reporting purpose

    Business scenario: We have a parent organization with 6 different business units. One BU requires 9 stages for Opportunity (Tender) Tracking. The client requirement is to show the basic details of the tender at the header level and to show details specific to each individual sales stage as different tabs. Multiple opportunity members will be added as the opportunity team and will be responsible for capturing details specific to an individual sales stage (tab). The tab should be enabled and disabled based on the role. Reporting is required against each stage, with specific fields of the child objects against each opportunity.
    We created multiple child entities under the opportunity (one-to-many mapping), but we are unable to add more than 3 child objects for a primary object for reporting purposes.
    Kindly suggest what needs to be done to meet the requirement.

    Can you provide the exact steps you took to "create multiple child entities under the opportunity"?
    Jani Rautiainen
    Fusion Applications Developer Relations
    https://blogs.oracle.com/fadevrel/

  • Rbdstate - Send ALEaudit in time critical scenario

    Hy all,
    How can I use the RBDSTATE report to send acknowledgement IDocs to my XI 7.0 server in a time-critical scenario? I use ack IDocs in a time-critical, complex BPM that waits for the audit before continuing with the following steps of transformation and sending XML messages to the ERP.
    The problem is that an OSS note says to schedule the job RBDSTATE with a period of more than 5 minutes; this time is too long for us (the BPM contains several steps that need an ERP ack).
    How can I solve this issue to simplify the BPM and make it faster?
    Thanks in advance,
    Marco

    Hi Chris,
    Is the "subVI" in question really a subVI of the higher-priority VI?  If yes, that would make it run at higher-priority as well -- probably not what you want to do. 
    See this for further details:  http://digital.ni.com/public.nsf/allkb/D7E975105812F0C586256A6B005B4957
    I would keep the two VIs separate (to keep their priorities separate) and transfer data via LV2-style globals between them.
    -Khalid

  • Which CAPS 6 Project Type is best for integration purposes?

    Which CAPS 6 Project Type is best for integration purposes?
    CAPS 6 is the company standard for integrating applications so I'm looking to get started converting XML from one format to another.
    Which project type is best to use for this scenario?
    Thanks in advance.

    As John suggested, you should probably post this question in a Visual Studio forum:
    https://social.msdn.microsoft.com/Forums/en-US/home?forum=visualstudiogeneral&filter=alltypes&sort=lastpostdesc
    Visual C++, C#, and VB.NET all run on the .NET environment and are different languages that have no difference in functionality.
    C and C++ do not run on the .NET platform, but directly on the CPU.
    C# also deals with processes and threads, and you can load DLLs at runtime/compile time if you want.
    C# also has console applications.
    http://technicaltrix.blogspot.dk/

  • Audio for Interviewing Equipment Question

    I'm in the process of creating a series of podcasts and instructional videos and need to be able to capture the best audio quality possible using 'prosumer' level equipment. The scenario for capturing this audio will be in a two-person interview setting. These interviews will be published on the internet as podcasts and broadcast on a local cable TV station.
    Right now I own a Sennheiser headset microphone with an Andrea external USB soundcard, and a Sony lavalier microphone.
    Based on my research, it seems like if I purchase a Beachtek DXA-2S XLR adapter I should be able to achieve my objective with the microphones I currently own.
    Would you please let me know if my assessment is accurate, and if not, what I need to do/purchase to get the audio quality I need?
    Thanks!
    core duo   Mac OS X (10.4.6)  

    Hi,
    Can you post the .cpp file that Zac made for you?
    It’s kinda hard to help you if I can’t work out what sort of callback you were trying to make.
    Btw,
    Based on your original post...
    >In the examples you gave me years ago, I made following declaration:
    >AEffect *main (audioMasterCallback audioMaster) { ...
    >}
    >In CS4 it complained so I made the following change:
    >int *main (audioMasterCallback audioMaster) { ...
    >}
    >
    ...I just grep'ed for the "audioMasterCallback" in all CS3|4|5 SDKs for AE and Premiere and it's not there.
    I'm wondering how you compiled at all when there is no definition for it in the headers?
    D:\My Downloads\Adobe\SDK>dir /ad
    Volume in drive D is MEDIA
    Volume Serial Number is E20D-79C1
    Directory of D:\My Downloads\Adobe\SDK
    09/10/2010  07:34 AM    .
    09/10/2010  07:34 AM    ..
    01/24/2010  04:52 PM    Adobe After Effects CS3 SDK
    09/10/2010  07:34 AM    Adobe After Effects CS4 SDK
    09/10/2010  07:34 AM    Adobe After Effects CS5 Win SDK
    01/24/2010  04:52 PM    Adobe Premiere Pro CS3 r1 SDK
    09/10/2010  07:31 AM    Premiere Pro CS4 r1 SDK Win
    09/10/2010  07:32 AM    Premiere Pro CS5 Win SDK
    0 File(s)  0 bytes
    8 Dir(s)  125,591,977,984 bytes free
    D:\My Downloads\Adobe\SDK>grep audioMasterCallback . -rn
    D:\My Downloads\Adobe\SDK>

  • Best application hosting scenario for weblogic servers

     

    Frankly speaking, it depends on the company's interest. If you have a multi-environment setup, it is always better to standardize on a single J2EE platform server; WebLogic could be your choice, as it can run almost every type of application in this world. Check out some of BEA's customers on the BEA website, covering different types of business from Visa to airline companies.
    "Vinay S" <[email protected]> wrote in message
    news:[email protected]..
    >
    Greetings...
    What is the best/optimum application hosting scenario for WebLogic servers at an organization level? Assuming there are many applications to be deployed, would the best solution be to create a managed server for each such application and deploy the application on it?
    What factors are critical to determine the number of applications that can be hosted on an admin/managed server? Are any stats available?
    Please also consider the monitoring viewpoint for the servers.
    Thanks...

  • RAC - Determine all instances whether enabled/disabled for inventory purpose

    Hello All,
    I can't seem to figure out how to get all the instance and host names in my cluster when some of the instances are disabled.
    My scenario is that I have 5 nodes and 3 of the 5 are disabled in the clusterware.
    I'm logging into one of the active instances, and when I select from GV$INSTANCE, I only see the active/enabled instances. On the other hand, the parameter "cluster_database_instances" shows 5.
    For inventory purposes I need to determine all 5 instance names and all 5 host names. How can I do this from SQL?
    Thanks in advance for any assistance or pointers.
    Sincerely
    JS

    Hi,
    If you want to query on the database, you can use the one below:
    -- instance number and name from gv$parameter, with status from gv$instance ('OFFLINE' when no matching row)
    select p2.value inst_id,
           p1.value instance_name,
           decode(i.status, null, 'OFFLINE', i.status) status
    from   gv$parameter p1
           join gv$parameter p2 on (p1.inst_id = p2.inst_id)
           left join gv$instance i on (p2.inst_id = i.instance_number)
    where  p1.name = 'instance_name'
    and    p2.name = 'instance_number';
    Regards.
