Is my approach correct

Hi All,
I have to call custom logic when an error message is thrown on a seeded page.
The message comes through the following flow:
page CO --> normal Java Class --> PL/SQL procedure
Now, what is the best way to extend the functionality so that my custom logic (sending a notification) is called when the error arises? The only way I can think of is to extend the controller, catch the exception, call the custom logic, and then re-throw the exception so that it still appears on the seeded page.
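For reference, here is a minimal sketch of that extend-and-re-throw idea; the seeded controller name (SeededPageCO) and the notification helper are placeholders, not the actual seeded class:

import oracle.apps.fnd.framework.OAException;
import oracle.apps.fnd.framework.webui.OAPageContext;
import oracle.apps.fnd.framework.webui.beans.OAWebBean;

// Hypothetical extended controller registered on the seeded page.
public class XxCustomPageCO extends SeededPageCO
{
    public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
    {
        try
        {
            // Let the seeded logic run first.
            super.processFormRequest(pageContext, webBean);
        }
        catch (OAException e)
        {
            // Custom logic: e.g. raise a workflow notification with the error text.
            sendNotification(e.getDetailMessage());
            // Re-throw so the original error still appears on the seeded page.
            throw e;
        }
    }

    private void sendNotification(String messageText)
    {
        // Placeholder: start the notification workflow here.
    }
}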
Please suggest whether this is the correct way or whether another approach should be used.
Thanks,
Srikanth

Hi Kumar,
Since the method is private, it will only be referenced by methods of the seeded class. And since the extended CO calls the public methods using super.method(), and the base class still has access to its own private method, do we really need to copy-paste the private method into my new CO? Please correct me if I am wrong.
I am now able to catch the exception, read it using OAException.getDetailMessage() (found by trial and error :) ), and invoke the workflow. Thanks for your inputs.
Thank you,
Srikanth

Similar Messages

  • Is this approach correct?

    Hi All,
    Oracle 10g
    I have a before insert or update trigger for each row which calls procedure P1.
    I am using the OUT parameter of the procedure to pass back a status (whether the procedure succeeded or not).
    This status is then used to decide whether to execute the next procedure, and so on.
    Since I can't update the table itself (mutating table issue), I am updating the :new column value with the status from the procedure's OUT parameter.
    This status is used to check and call the next procedure.
    I am able to update the table with the latest status.
    Is there any flaw in this approach?
    Thanks.
    Trigger Code:
    create or replace trigger proc_status_trg
    before insert or update on proces_status
    for each row
    declare
      tempstatusval1 varchar2(3);
      tempstatusval2 varchar2(10);
      tempeco        varchar2(3);
      tempstatus     varchar2(10);
      tempfs1        varchar2(3);
      tempfs2        varchar2(10);
    begin
      -- Step 1: map validation for new rows that have no process stage yet
      if :new.sl_flag = 'N' and :new.process_stage is null then
        dbms_output.put_line('Calling Map Val Procedure');
        pr_map_validation(:new.asgn_id, :new.process_stage, :new.sl_flag, :new.aliq_flag,
                          tempstatusval1, tempstatusval2);
        :new.sl_flag       := tempstatusval1;
        :new.process_stage := tempstatusval2;
      end if;
      -- Step 2: pre-ECO validation, driven by the status set in step 1
      if :new.sl_flag = 'X' and :new.process_stage = 'preeco' then
        pr_preeco_validation(:new.asgn_id, :new.process_stage, :new.sl_flag, :new.aliq_flag,
                             tempeco, tempstatus);
        :new.sl_flag       := tempeco;
        :new.process_stage := tempstatus;
      end if;
      -- Step 3: FS validation (result currently not written back to :new)
      if :new.sl_flag = 'Z' and :new.process_stage = 'SPV' then
        fs_validation(tempfs1, tempfs2);
      end if;
    end;

    user545846 wrote:
    Please give your comments.
    Please understand that we're all volunteers here. If you need immediate replies, there are plenty of commercial entities out there that will be happy to provide a quicker SLA on questions. Otherwise, please be patient -- it may take a few hours for someone to reply.
    That said, is there a reason that you aren't using exceptions? That would seem like the far simpler solution.
    Justin

  • PI 7.11 mapping lookup - data enrichment - appropriate approach?

    Hi guys,
    We just upgraded from PI 7.0 to PI 7.11.
    Now I'm facing a new scenario where an incoming order has to be processed
    (HTTP to RFC).
    Furthermore, each item of the order has to be enriched with data looked up in an SAP ERP 6.0 system.
    The lookup functionality can be accessed via RFC or ABAP Proxy.
    With the new PI release we have several possibilities to implement this scenario, which are ...
    (1) graphical RFC Lookup in message mapping
    (2) ccBPM
    (3) using of the lookup API in java mapping
    (4) message mapping RFC Lookup in a UDF
    For performance reasons I would prefer to make use of the Advanced Adapter Engine, if possible.
    Furthermore, there should be only one lookup request for all items of the order rather than one request per order item.
    I tried to implement possibility (1), but it seems to be hard to fill the request table structure of the RFC function module. All the examples on SDN use only simple (single) input parameters instead of tables. Parsing the result table of the RFC seems to be tricky as well.
    Afterwards I tried to implement approach (3) using a SOAP adapter as proxy with the XI 3.0 protocol
    (new functionality in PI 7.11).
    But this ends in a strange error message, so it seems the SOAP adapter cannot be used as a proxy adapter in this case.
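    For reference, here is a rough sketch of how option (3) / (4) could look with the mapping lookup API. The business system, channel, and request content are placeholders, and the exact classes and method signatures are written from memory, so please verify them against the PI 7.11 lookup API javadoc before relying on this:

    import com.sap.aii.mapping.lookup.Channel;
    import com.sap.aii.mapping.lookup.LookupException;
    import com.sap.aii.mapping.lookup.LookupService;
    import com.sap.aii.mapping.lookup.Payload;
    import com.sap.aii.mapping.lookup.RfcAccessor;
    import com.sap.aii.mapping.lookup.XmlPayload;
    import java.io.ByteArrayInputStream;
    import java.io.InputStream;

    public class ItemEnrichmentLookup {

        // requestXml: the RFC request for ALL order items, serialized as XML,
        // so that only one lookup call is made per order.
        public static InputStream enrichItems(String requestXml) throws LookupException {
            RfcAccessor accessor = null;
            try {
                // "SAP_ERP_BS" / "CC_RFC_LOOKUP" are placeholder names for the
                // business system and the RFC receiver channel in the Integration Directory.
                Channel channel = LookupService.getChannel("SAP_ERP_BS", "CC_RFC_LOOKUP");
                accessor = LookupService.getRfcAccessor(channel);

                XmlPayload request =
                    LookupService.getXmlPayload(new ByteArrayInputStream(requestXml.getBytes()));
                Payload response = accessor.call(request);

                // The mapping then parses the response table out of this stream.
                return response.getContent();
            } finally {
                if (accessor != null) {
                    try {
                        accessor.close();   // always release the accessor
                    } catch (LookupException ignored) {
                        // best-effort close
                    }
                }
            }
        }
    }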
    ccBPM also seems to be a good and transparent approach, because there is no need for complex Java code or the lookup API.
    So the choice is not easy.
    What's the best approach for this scenario?
    Are my notes on the approaches correct, or am I interpreting them wrongly?
    Any help, ideas appreciated
    Kind regards
    Jochen

    Hi,
    The error while trying to use the SOAP channel for proxy communication is:
    com.sap.aii.mapping.lookup.LookupException: Exception during processing the payload. Error when calling an adapter by using the communication channel SOAP_RCV_QMD_100_Proxy (Party: , Service: SAP_QMD_MDT100_BS, Object ID: 579b14b4c36c3ca281f634e20b4dcf78) XI AF API call failed. Module exception: 'com.sap.engine.interfaces.messaging.api.exception.MessagingException: java.io.IOException: Unexpected length of element <sap:Error><sap:Code> = XIProxy; HTTP 200 OK'. Cause Exception: 'java.io.IOException: Unexpected length of element <sap:Error><sap:Code> = XIProxy; HTTP 200 OK'.
    So this feature does not seem to work for SOAP lookups.
    Kind regards
    Jochen

  • Time to upgrade to Xserve-Advice Please

    Hi,
    I'm currently the default "IT" guy at my company. We are a small company without any kind of dedicated IT department. We started out 15-20 years ago with a single Mac and have grown throughout the years. We now have 16 stations with 4 of them being PCs. At first we had our network setup with a simple file sharing setup. When we need to share files, each computer would connect with which ever other computer we needed to. This worked ok for a while but then we had the problem of archiving all of our data. I would manually compile everyone's work on a specific project and back it up to an external hard drive. This was not the most efficient solution, but it was effective and we used this method for many years. We've recently expanded our business to include Maya work which is done on PCs. At the same time we introduced 2x 2TB Lacie Ethernet network storage devices. Now I have everyone working within the same folder on the Lacie drives. This makes my life easier as far as archiving. Part of the problem I've got now is with the PCs. We've had them for a few months now and I've never really gotten them fully integrated into our environment. I have a single PC at home, but the work environment has always been completely Mac based. Meaning I have some PC knowledge but really none as far as networking goes. I know that we need to get a server to accomplish a couple of different things.
    1) Better integration of PCs and Macs
    2) Improved Rendering for Maya (PC based Maya)-We are going to implement a render farm into the mix as well- we are currently looking at a Boxx system and an SGI system as well
    3) Central Storage with backup
    4) Improved speed (with the Lacie storage devices, there is some slowdown when first connecting)
    5) Remote login--i.e. something like the ability to login from home and set up a render over the weekend
    I would prefer to go with an Xserve because of my familiarity with Mac products. My problem is that when I look at the specs for the Xserve, I really don't know what anything is. I have no idea how to price anything for the boss.
    It makes me feel like I am getting in over my head, but at the same time, I am confident enough in my abilities that if I get the correct system set up, I should have no problem administering it for our needs.
    A) I imagine I'd want to go with the 8 core to start with
    B) What kind of performance boost-and in what areas-would I get with upgrading to the 2.93 or 2.66 GHz?
    C) I imagine I should get the most RAM we could fit into the budget?
    D) what is the Solid State drive? Do I need it?
    E) I can have 3 hard drives. How many do I need? why would I need more than one?
    F) Extra power supply..I understand why this makes sense
    G) XSan? what is it? do I need it?
    We currently also only have everything connected via ethernet. What's the deal with Fibre channel? Do I need to install a Fibre channel card in each workstation plus run all new wiring? If so then this is a very pricey deal and way unrealistic as far as budget at this time.
    I'm also wondering about the render farm. Will either a Boxx system or a SGI system integrate well with an Xserve when rendering PC Maya files? Would there be an option to configure an Xserve as a render farm? If so could I render PC Maya files on it?
    Sorry for the long winded post. I'm lost and need some help from some experts. Hopefully there is someone out there who can lead me in the right direction.
    Thanks for your help.
    Phil

    Hi
    My two cents.
    +I know that we need to get a server to accomplish a couple of different things+
    Apart from load balancing etc., you don't really mention anything that couldn't be covered either by a 3rd-party product or by what you've already got in place.
    +1) Better integration of PCs and Macs+
    With the environment you're describing you don't specifically need a Server for this although it depends on what you want to achieve.
    +2) Improved Rendering for Maya (PC based Maya)-We are going to implement a render farm into the mix as well- we are currently looking at a Boxx system and an SGI system as well+
    Perhaps XGrid might fit the bill?
    +3) Central Storage with backup+
    Depending on what Configuration you choose (Standard, Workgroup or Advanced) this could be easily achieved with what's already built in to the Server OS. On a side note the XServe is not the Server. The OS is. OSX Server can be installed on any qualifying hardware. The XServe is dedicated Server hardware built specifically to perform a Server (with all that that means) role. Redundancy is built-in. With what you're describing perhaps a suitable MacPro would be better?
    With the XServe you have to consider a rack as well as noise if a rack is not an option.
    +4) Improved speed (with the Lacie storage devices, there is some slowdown when first connecting)+
    This is probably due to the fact the drives are connected to a client machine and OS that is now not coping with the amount of client access required. I'm guessing someone is also working at that client mac?
    +5) Remote login--i.e. something like the ability to login from home and set up a render over the weekend+
    This might be possible with XGrid although you might have a real problem with bandwidth
    +I would prefer to go with an Xserve because of my familiarity with Mac products. My problem is that when I look at the specs for the Xserve, I really don't know what anything is. I have no idea how to price anything for the boss.+
    +It makes me feel like I am getting in over my head, but at the same time, I am confident enough in my abilities that if I get the correct system set up, I should have no problem administering it for our needs.+
    Leopard Server is a whole new ballgame and if not approached correctly Apple's marketing slogan of "No IT Required" may seem like a sick joke
    +A) I imagine I'd want to go with the 8 core to start with+
    If you have the budget get the best you can afford
    +B) What kind of performance boost-and in what areas-would I get with upgrading to the 2.93 or 2.66 GHz?+
    This would depend on what you want and what you're expecting, and you won't really know until some time has passed. Besides, how could you compare them? Unless you buy and bench-test them both you can't really know. It might be possible you know someone who has exactly the same environment as you, doing exactly the same work and wanting exactly the same solution. If you use Google you should be able to find some performance benchmark results; whether they're applicable to your environment is another question.
    +C) I imagine I should get the most RAM we could fit into the budget?+
    See (A)
    +D) what is the Solid State drive? Do I need it?+
    http://support.apple.com/kb/SP511
    http://www.apple.com/xserve/features/storage.html
    http://blogs.computerworld.com/newapple_xserves_offer_additional_ssdbay
    +E) I can have 3 hard drives. How many do I need? why would I need more than one?+
    See (A). To be honest only you would really know? With the optional Apple RAID Card you could have 3x1TB Drives as a RAID 5. This provides some measure of redundancy (which is not a back-up) as well as performance. However you could as easily use a MacPro that has the 4 drive bays. Potentially 3+TB of Storage. A simple rule of thumb would be: How much data do I have now? Does the Storage I have now cope with it? Will the amount of Data grow over time? Do I have enough storage to cope with that growth?
    +F) Extra power supply..I understand why this makes sense+
    The XServe can be built-to-order with two PSUs providing redundancy. You should factor in a UPS as well. You don't want a power cut to potentially ruin all your data?
    +G) XSan? what is it? do I need it?+
    http://manuals.info.apple.com/en/xsan/XsanGettingStarted.pdf
    I think this would be way too much in terms of your environment as well as your budget. However read the pdf and come to a reasoned judgment.
    +We currently also only have everything connected via ethernet. What's the deal with Fibre channel? Do I need to install a Fibre channel card in each workstation plus run all new wiring? If so then this is a very pricey deal and way unrealistic as far as budget at this time.+
    No to the first two questions. Read the XSan pdf.
    +I'm also wondering about the render farm. Will either a Boxx system or a SGI system integrate well with an Xserve when rendering PC Maya files? Would there be an option to configure an Xserve as a render farm? If so could I render PC Maya files on it?+
    I have no experience of either of the two products you mention. Perhaps someone who does may post?
    Does this help? Tony

  • Capture FPM Event in another WDC

    Hello Friends,
    I am trying to capture, in a standard WDC, the name of the event (EDIT, CLOSE, NEXT, etc.) that is defined in FPM.
    I need this because, based on the ID of the button, I need to process some information.
    Could you please let me know how to handle this? Any code would be fine.
    Regards,
    Vinay

    Hello Jens..
    I am not trying to put the standard WDC delivered by SAP into edit mode; I am working through an enhancement implementation.
    I do have development authorization.
    The interface IF_FPM_UI_BUILDING_BLOCK is already implemented in the WDC.
    What I did was:
    In the component controller, the method PROCESS_EVENT is not editable,
    but I wrote the code for capturing the instance of the FPM event in the post-exit of that method.
    When I tested this in the portal, that code was not executed, and hence I was unable to capture the event.
    Is the design approach correct, or is there another way?
    Many thanks.
    Regards,
    Vinay
    Edited by: Vinay Reddy on Feb 6, 2012 12:12 PM
    Edited by: Vinay Reddy on Feb 6, 2012 12:24 PM

  • Returning a parameter back to UIX app

    Hi, I am wondering if I am doing this correctly or if there is a better way.
    I have created a UIXML page with two choice boxes. The second is populated based on the selection in the first, by querying a database with that value.
    I have an event that is fired via my primaryClientAction. This event receives the value from the first choice box.
    Okay now for my question:
    I am taking this value and creating a sql-only view object with an application module. Is this approach correct? Do I need to create an iterated view of this data to pass back to the xml page? What is the best way to give the result set from the query back to the 2nd choice box?

    You can supply the second choice with either a List or a Map. The best way to query the values is out of my domain.
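    For what it's worth, here is a rough sketch of one way to do that: build the List in the application module from a SQL-only view object created on the fly. The AM class, query, table, and column names are illustrative only:

    import java.util.ArrayList;
    import java.util.List;
    import oracle.jbo.Row;
    import oracle.jbo.ViewObject;
    import oracle.jbo.server.ApplicationModuleImpl;

    public class LookupAMImpl extends ApplicationModuleImpl {

        // Returns the values for the second choice box, driven by the first selection.
        public List getSecondChoiceValues(String firstChoiceValue) {
            // Throwaway SQL-only view object for this query.
            // (In real code, prefer a bind variable over string concatenation.)
            ViewObject vo = createViewObjectFromQueryStmt(
                "SecondChoiceVO",
                "SELECT child_value FROM child_lookup WHERE parent_value = '" + firstChoiceValue + "'");
            try {
                vo.executeQuery();
                List values = new ArrayList();
                while (vo.hasNext()) {
                    Row row = vo.next();
                    values.add(row.getAttribute(0));   // single column in the query
                }
                return values;
            } finally {
                vo.remove();   // clean up the dynamic view object
            }
        }
    }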

  • How to send a file using IOCP?

    When using blocking sockets, all I had to do to send a file was to open the file and loop through it and send it in chunks.
    But I find sending a file using overlapped sockets to be more challenging. I can think of the following approach to do it:
    1. I open the file and send the first chunk, and I keep track of the file handle and file position.
    2. When I get a completion packet indicating that some data has been sent, I check to see if the socket is currently in the process of sending a file, and if it is, I retrieve the file handle and file position and send the next chunk.
    3. I repeat step 2 until I reach the last chunk of the file, and then I close the file.
    Is this approach correct?
    Note: I don't want to use TransmitFile().

    This approach is more or less correct, but there are a few more things you should know.
    When a send "returns" as complete, it means that your buffer has been copied into an internal buffer of the system or the network interface card; in general it means that you can free or reuse the buffer you supplied, but it does not mean that the data has been delivered (it does not even mean it has been sent yet).
    That's why I normally use some flow control (messages from the receiver) to verify the real data flow.
    The next point is that you shouldn't wait to read from the file until you get the notification that the previous chunk has been sent. You should read the data as early as possible so that you can respond much more quickly to a send-complete notification. I'd recommend sending using multiple buffers.
    Rudolf
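    For illustration only, here is a minimal sketch of the same completion-driven pattern using Java NIO.2 asynchronous channels instead of Win32 IOCP: write a chunk, and on write completion read and send the next one. The read-ahead and multiple buffers Rudolf recommends, plus real error handling, are left out.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousFileChannel;
    import java.nio.channels.AsynchronousSocketChannel;
    import java.nio.channels.CompletionHandler;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    // Minimal completion-driven file sender (Java NIO.2 analog of the IOCP approach).
    public final class AsyncFileSender {

        private static final int CHUNK_SIZE = 64 * 1024;

        public static void sendFile(AsynchronousSocketChannel socket, Path file) throws IOException {
            AsynchronousFileChannel fileChannel =
                AsynchronousFileChannel.open(file, StandardOpenOption.READ);
            sendNextChunk(socket, fileChannel, 0L);
        }

        // Read one chunk at 'position'; when the read completes, write it to the socket;
        // when the write completes, advance the position and repeat until end of file.
        private static void sendNextChunk(AsynchronousSocketChannel socket,
                                          AsynchronousFileChannel fileChannel,
                                          long position) {
            ByteBuffer buffer = ByteBuffer.allocate(CHUNK_SIZE);
            fileChannel.read(buffer, position, buffer, new CompletionHandler<Integer, ByteBuffer>() {
                @Override
                public void completed(Integer bytesRead, ByteBuffer buf) {
                    if (bytesRead == -1) {           // end of file: nothing left to send
                        closeQuietly(fileChannel);
                        return;
                    }
                    buf.flip();
                    socket.write(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
                        @Override
                        public void completed(Integer bytesWritten, ByteBuffer b) {
                            if (b.hasRemaining()) {
                                // Partial write: keep writing the same buffer.
                                socket.write(b, b, this);
                            } else {
                                // Chunk fully handed to the socket; send the next one.
                                sendNextChunk(socket, fileChannel, position + bytesRead);
                            }
                        }
                        @Override
                        public void failed(Throwable exc, ByteBuffer b) {
                            closeQuietly(fileChannel);
                        }
                    });
                }
                @Override
                public void failed(Throwable exc, ByteBuffer buf) {
                    closeQuietly(fileChannel);
                }
            });
        }

        private static void closeQuietly(AsynchronousFileChannel ch) {
            try { ch.close(); } catch (IOException ignored) { }
        }
    }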

  • Advanced template to drill down from one dimension to another

    Dear all:
    Situation
    I am trying to create a report that displays my Accounts by month. In addition, when I double-click an account (if it is a base member), it drills down to open the Entity (in my case Cost Center) associated with that account number. In the Entity drill-down section, once the Entity becomes a base member, it drills down to the IO (internal orders) associated with that particular cost center.
    Data
    1. I tried to manipulate the existing "Double Drill Down" dynamic template and disabled the column expansion (because my columns consist of months of actual vs. budget and some ratio calculations). The problem is that after I drilled down, my EVGTS formula was not replicated in the other dimension.
    2. I tried to integrate EVEXP into EVDRE but was unsuccessful too... The expansion worked as if everything got expanded at the same time.
    Question
    Is my approach correct? Is this even good practice, or should I keep it as standard as possible?
    Thank you!
    Have a great day!
    Sincerely,
    Brian

    The double-drill-down dynamic template accommodates my need; it turned out I just hadn't modified the report to my format correctly.

  • Do you have an idea how to improve the performance ?

    Hi All,
    Greeting,
    I'm doing SEM IP. Regarding performance, do you have some thoughts on this?
    I have a planning report for projects. As we know, if we forecast against a project, the time horizon is the life of the project itself.
    That means it could be more than 10 years of forecast periods plus 10 years of actual periods. Currently I segregate actual and forecast into different InfoCubes.
    But the performance of the planning report is now slow. Do you have any ideas on how to improve it? The performance I mean here is when entering the report (after filling in the values on the selection screen).
    The other question: at the moment I have a MultiProvider consisting of 2 InfoCubes (actual and forecast), and my aggregation level sits on top of that MultiProvider.
    My question is whether that approach is correct or not. Or should I create one aggregation level (only for forecast), then have a MultiProvider consisting of the forecast aggregation level and the actual cube,
    with my query sitting on top of that MultiProvider?
    Which one is better?
    Thanks a lot all,
    really need your help,

    Hi,
    For performance tuning, you can consider any of the following methods:
    1. Indices
    With an increasing number of data records in the InfoCube, not only the load but also the query performance can be reduced. This is attributed to the increasing demands on the system for maintaining indexes. The indexes that are created in the fact table for each dimension allow you to easily find and select the data.
    2. Partitioning
    By using partitioning you can split up the whole dataset for an InfoCube into several, smaller, physically independent and redundancy-free units. Thanks to this separation, performance is increased when reporting, or also when deleting data from the InfoCube.
    3. Aggregates 
    Aggregates make it possible to access InfoCube data quickly in Reporting. Aggregates serve, in a similar way to database indexes, to improve performance.
    4. Compressing the Infocube
    InfoCube compression means aggregation of the data ignoring the request IDs. After compression, the system need not perform aggregation using the request ID every time you execute a query.
    And I feel that, for your scenario, you first need to compress the data based on user requirements and keep only the required data in the InfoCube.
    And for the approach regarding the Aggregation level design, choosing between the two approaches depends on the user requirements. For example,
    If you have aggregation level created on top of multiprovider containing actual and forecast cube, in your report (created on top of aggregation level) you can view the key figure values present in both the cubes, which is not possible in the other approach.
    So this approach is suited if your requirement is to view the records from both the cubes in your report (Comparing planning and actual values).
    The second approach is used if your requirement is only to report on planning forecast cube.
    Hope this solves your issue.
    Regards,
    Balajee

  • Error while Inserting data into flow table

    Hi All,
    I am very new to ODI and I am facing a lot of problems with my first interface. I have many questions here, so please bear with me.
    ========================
    I am developing a simple project to load data from an input source (csv) file into a staging table.
    My plan is to achieve this in 3 interfaces:
    1. Interface-1 : Load the data from an input source (csv) file into a staging table (say Stg_1)
    2. Interface-2 : Read the data from the staging table (stg_1) apply the business rules to it and copy the processed records into another staging table (say stg_2)
    3. Interface-3 : Copy the data from staging table (stg_2) into the target table (say Target) in the target database.
    Question-1 : Is this approach correct?
    ========================
    I don't have any key columns in the staging table (stg_1). When I tried to execute Flow Control on this interface, I got an error:
    Flow Control not possible if no Key is declared in your Target Datastore
    Based on one of the responses in this forum ("Flow control requires a KEY in the target table"), I introduced a column called "Record_ID" in my staging table (stg_1), made it the primary key, and the problem was resolved.
    Question-2 : Is a key column compulsory in the target table? I also work in BO Data Integrator, where there is no such requirement, so I am a little confused.
    ========================
    Next, I defined a project-level sequence and mapped the newly introduced key column Record_Id (primary key) to it. Then I got another error: "CKM not selected".
    For this, I added the "Insert Check (CKM)" knowledge module to my project, and the "CKM not selected" problem was resolved.
    Question-3 : When is this CKM knowledge module required?
    ========================
    After this, the interface fails while loading data into the intermediate ODI-created flow table (I$):
    1 - Loading - SS_0 - Drop work table
    2 - Loading - SS_0 - Create work table
    3 - Loading - SS_0 - Load data
    5 - Integration - FTE Actual data to Staging table - Drop flow table
    6 - Integration - FTE Actual data to Staging table - Create flow table I$
    7 - Integration - FTE Actual data to Staging table - Delete target table
    8 - Integration - FTE Actual data to Staging table - Insert flow into I$ table
    The error is at step 8 above. When I opened the "Execution" tab for this step, I found the message "Missing parameter Project_1.FTE_Actual_Data_seq_NEXTVAL RECORD_ID".
    Question-4 : What does this error mean? Did I make a mistake while creating the sequence?

    Everyone is new and starts somewhere. And the community is there to help you.
    1.) What is the idea behind moving data to stg_1 and then to stg_2? Do you really need them for any purpose other than moving data from the source file to the target DB?
    Otherwise, it is simpler to move data directly from SourceFile -> Target Table.
    2.) Does your Target table have a Key ?
    3.) CKM (Check KM) is required when you want to do constraint validation (checking) on your data. You can define constraints (business rules) on the target table, and Flow Control will check the data flowing from the source file to the target table using the CKM. All the records that do not satisfy the constraints will be added to the E$ (error) table and will not be added to the target table.
    4.) Try to avoid ODI sequences. They are slow and aren't scalable. Use a database sequence wherever possible, and use the DB sequence in the target mapping as
    <%=odiRef.getObjectName( "L" , "MY_DB_Sequence_Row" , "D" )%>.nextval
    where MY_DB_Sequence_Row is the oracle sequence in the target schema.
    HTH

  • Web Service Proxy in OAF project

    Hi
    I am trying to create a Web Service proxy class that I can use to call a web service from a custom OAF page.
    I am doing this by right clicking on the OAF project (12i project using 10.1.3.3.0.3 of JDev with OAF ext) and choosing New -> Web Services -> Web Service Proxy.
    In doing so I get the 'Create Web Service Proxy' wizard and can choose the wsdl, which I have downloaded locally. On choosing the wsdl (Search.wsdl) I get the following error:
    The name .proxy.SearchSoapImpl is not a valid java class name.
    If, however, I create a new empty project (not OAF) and follow the same steps, it creates the proxy and its classes successfully.
    I am assuming I can now include these in my OAF project, but I wondered if anyone has seen this error and, if so, whether there is a reason why you can't create the proxy directly in an OAF project.
    Is the approach above correct?
    Robert

    OK, no one appears to be commenting on this.
    I've got this working by creating my proxy class in a new OAF project and including it in the project that will be calling it.
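    For completeness, a minimal sketch of how the generated proxy might then be invoked from the calling project; SearchSoapClient and its search() operation are purely hypothetical names, since the wizard derives the real class and method names from the WSDL (here Search.wsdl):

    import oracle.apps.fnd.framework.OAException;

    public class SearchServiceHelper {

        public static String callSearch(String searchTerm) {
            try {
                SearchSoapClient proxy = new SearchSoapClient();   // generated stub (hypothetical name)
                return proxy.search(searchTerm);                   // generated operation (hypothetical name)
            } catch (Exception e) {
                // Surface the failure on the calling OAF page.
                throw new OAException("Web service call failed: " + e.getMessage());
            }
        }
    }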

  • Getting APPSRV_JNDI_LOOKUP_ERROR while calling CAF Ext Serv from WebDynpro

    Hi,
    I am developing a composite application using a CAF External Service and Application Service with a Web Dynpro UI, wherein all the business logic is developed and invoked as web services. I do not face any problem generating the project code, building, deploying, and running the application on my machine. But when I deploy the relevant .ear files to another machine, I get APPSRV_JNDI_LOOKUP_ERROR and am unable to perform any operation which invokes a web service.
    I am only deploying to the other machine the following .ear files, available on my machine under the ...\LocalDevelopment\DCs\sap.com\project\... folder:
    sap.com~project.ear
    sap.comprojectmetadata.ear
    sap.comprojectpermissions.ear
    sap.comprojectwebdynpro.ear
    The following is the exception trace that I get :-
    ===========================================================
    Message : APPSRV_JNDI_LOOKUP_ERROR
    [EXCEPTION]
    com.sap.caf.rt.exception.ServiceException: Object not found in lookup of ZWSD__MATERIAL__SAVEDATA.
         at com.sap.pxwebservice.utils.HomeFactory.getLocalHome(HomeFactory.java:60)
         at com.sap.pxwebservice.appsrv.materialsavedata.MaterialSaveDataBean.getZWSD__MATERIAL__SAVEDATA(MaterialSaveDataBean.java:341)
         at com.sap.pxwebservice.appsrv.materialsavedata.MaterialSaveDataBean.MaterialSaveData(MaterialSaveDataBean.java:315)
         at com.sap.pxwebservice.appsrv.materialsavedata.MaterialSaveDataLocalLocalObjectImpl0.MaterialSaveData(MaterialSaveDataLocalLocalObjectImpl0.java:103)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    ===========================================================
    Please let me know:
    1) I find that one more .ear file is always created under ...\LocalDevelopment\DCs\sap.com\project\webdynpro\_comp.
    But I am unable to deploy this because it always throws an invalid EAR error. If I need to deploy this as well, how do I do it?
    2) Is my deployment approach correct? Is there a simpler approach?
    3) If I need to deliver the project binaries to a third party, how do I package them to ensure they can be re-deployed at the other end?
    Thanks in advance,
    Regards,
    Rajkumar

    Looks like a known issue with missing jar files.
    Please refer to Oracle support document - [ID 1332553.1]

  • Development of new functionality

    We have a requirement to create a new functionality called 'Purchase Proposal'. This will be created with reference to either a shopping cart, a bid, or a live auction, and will be referenced while creating a PO in SRM.
    I am using the following approach for this.
    1) Create a transaction in the GUI. This will involve creating Z tables, writing ABAP code for the logic, creating a number range object, etc.
    2) After testing this transaction successfully in the GUI, create a web service and publish it on ITS so that the transaction can be accessed through the WebGUI.
    3) Create a business object for workflow
    4) Create a workflow template for the business object.
    5) Modify the standard functionality of PO processing so that a Purchase Proposal can be entered while creating a PO and the data from it will flow into the PO.
    Is this approach correct, or is anything missing?

    Hi,
    This approach is correct.
    Please pay attention to point 5, because you will be modifying the standard functionality.
    Regards,
    Marcin Gajewski

  • Goods Receipt for Inbound HU - WS_DELIVERY_UPDATE

    Hi All,
    I have a requirement to automate transaction VL60p to do GR for
    an inbound delivery with HU.
    Since this is an SAP Enjoy transaction I can't use BDC, and there doesn't seem to be a BAPI for this.
    I plan to use WS_DELIVERY_UPDATE, filling tables VERKO and VERPO.
    Is my approach correct?
    If not can you point me in the right direction?
    Thanks,
    Miguel

    Anyone?
    I've also added IT_OBJECTS as one of the tables I'm passing.
    It's of type PGR_OBJECTS, which is described as "Objects for Partial Goods Receipt".
    However, posting is still not happening.
    Thanks,
    Miguel

  • Restrict navigation on the basis of value

    There is a supplier table. Only if the status of the supplier row is DRAFT or NEW should I be able to see its details by clicking on a command link. For this, I attached a property listener to get the value #{row.Status} and then wrote the logic in the ActionListener property of the command link as below.
    public void moveToDetailPageBasedOnStatus(ActionEvent actionEvent) {
        DCBindingContainer bc = (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
        Map map = bc.getParametersMap();
        String status = ((DCParameter) map.get("p_status")).getValue().toString();
        if (status.equals("DRAFT")) {
            FacesContext context = FacesContext.getCurrentInstance();
            context.getApplication().getNavigationHandler().handleNavigation(context, null, "create");
        } else {
            FacesContext context = FacesContext.getCurrentInstance();
            FacesMessage msg2 =
                new FacesMessage(FacesMessage.SEVERITY_INFO, "", "User can see the details of an Issue in DRAFT Status only.");
            context.addMessage(null, msg2);
        }
    }
    Then I read the comment by Frank Nimphius in the post below, saying: "+Why don't you use a method call activity or (even better) a router if it is all about directing different users to different views? The approach of using HandleNavigation is not optimal.+"
    FacesContext context = FacesContext.getCurrentInstance();
                context.getApplication().getNavigationHandler().handleNavigation(context, null, "create");
    http://www.adftips.com/2010/10/adf-ui-navigating-to-next-page.html
    So then I am passing a parameter to a bounded task flow where I define a pageFlowScope parameter, then created a method call activity and set it as the default activity. The code is almost the same.
    public void checkIssueStatus() {
        String value = (String) ADFContext.getCurrent().getPageFlowScope().get("p_status");
        if (value.equals("DRAFT") || value.equals("NEW")) {
            FacesContext context = FacesContext.getCurrentInstance();
            context.getApplication().getNavigationHandler().handleNavigation(context, null, "checked");
        } else {
            FacesContext context = FacesContext.getCurrentInstance();
            FacesMessage msg2 =
                new FacesMessage(FacesMessage.SEVERITY_INFO, "", "User can see the details of an Issue in DRAFT Status only.");
            context.addMessage(null, msg2);
        }
    }
    The problem is that when p_status is anything other than NEW or DRAFT, the message is displayed, but then the detail page is also shown. I only want to display the message and not navigate to the next page. How can I resolve this issue?
    And the big question: Is my approach correct and optimal this time?

    No; as you already know, it is the same approach you used before.
    In your task flow, use a router element as the start activity, where you check the parameter using EL and, depending on the outcome, navigate to the view you like.
    <?xml version="1.0" encoding="UTF-8" ?>
    <adfc-config xmlns="http://xmlns.oracle.com/adf/controller" version="1.2">
      <task-flow-definition id="router-task-flow-definition">
        <default-activity>router1</default-activity>
        <router id="router1">
          <case id="__1">
            <expression>#{pageFlowScope.pStatus eq 'NEW'}</expression>
            <outcome>outcome1</outcome>
          </case>
          <case id="__2">
            <expression>#{pageFlowScope.pStatus eq 'INIT'}</expression>
            <outcome>outcome2</outcome>
          </case>
          <default-outcome>outcome3</default-outcome>
        </router>
        <view id="view1"></view>
        <view id="view2"></view>
        <view id="view3"></view>
        <control-flow-rule id="__3">
          <from-activity-id>router1</from-activity-id>
          <control-flow-case id="__4">
            <from-outcome>outcome1</from-outcome>
            <to-activity-id>view1</to-activity-id>
          </control-flow-case>
          <control-flow-case id="__5">
            <from-outcome>outcome2</from-outcome>
            <to-activity-id>view2</to-activity-id>
          </control-flow-case>
          <control-flow-case id="__6">
            <from-outcome>outcome3</from-outcome>
            <to-activity-id>view3</to-activity-id>
          </control-flow-case>
        </control-flow-rule>
        <use-page-fragments/>
      </task-flow-definition>
    </adfc-config>
    This way the navigation is handled by the controller without using the navigation handler.
    A full sample you can find here: http://tompeez.wordpress.com/2012/12/01/jdeveloper-11-1-1-5-0-use-router-to-create-new-row-or-edit-existing/
    Timo
