Double execution

Hi,
I have an application with a Java front end and Oracle 9i as the back end. In one of my processes I call a package to do some computation. The problem is that the main procedure in my package executes twice, even though I call it only once. This happens when I browse through IIS; it does not occur when I browse through port 8000. Please advise if anybody has faced the same problem.
Thanks in advance
regards,
Dibu.

Oracle is a passive server platform in terms of client interaction.
It does not decide, "oh cripes, IIS is calling me, I had better run another copy of the client's request in a brand new session".
It does not care what TCP port you use.
It does not care what client language you use.
It does not care what client o/s you use.
Oracle simply sees a client network connection/session - and it services that. As simple as that.
If there is a problem with the number of client network connections created, that is not Oracle's problem.
Fundamental client-server principles....
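
To make that concrete, here is a minimal sketch of what such a call looks like from the Java side over JDBC. The connect string, credentials, and the package/procedure names (pkg_calc.run_main) are placeholders, not taken from the original post. The point is that one prepareCall/execute pair produces exactly one execution of the procedure in that session, so if the procedure runs twice, the client code path itself is being entered twice (for example, the web tier dispatching the request twice).

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class CallPackageOnce {
    public static void main(String[] args) throws Exception {
        // Hypothetical connect string and credentials; adjust for your environment.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger")) {
            // One prepareCall/execute pair = one execution of the procedure
            // inside this database session.
            try (CallableStatement cs = con.prepareCall("{ call pkg_calc.run_main(?) }")) {
                cs.setInt(1, 42);
                cs.execute();
            }
        }
        // If the procedure still runs twice, log the entry point in the web tier:
        // the request (and therefore this code) is almost certainly arriving twice.
    }
}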

Similar Messages

  • Double execution of actionListener method, normal behavior

    Hi,
    I don't understand this behavior. I have a commandButton in my JSP:
    <h:commandButton value="testActionListener" actionListener="#{LoginBean.actionListener}" immediate="true" />
    When I click on this button, the associated code (my actionListener method in my bean) is executed twice.
    In my opinion, if the scope of my bean is "request", the execution order should be the constructor and then my actionListener method.
    Why is this method executed two times after the constructor? Is that the normal behavior?
    Thanks
    Edited by: Antony97 on Dec 13, 2008 8:03 AM

    Hello,
    try this link:
    http://www.javabeat.net/tips/67-how-to-implement-actionlistener-factionlist.html
    Initially I got the same problem.
    Write a separate bean and action listener; as the link suggests, that solves the problem.
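
    For illustration, a minimal sketch of that kind of separation: the button logic lives in a standalone ActionListener implementation rather than in a bean method wired through the actionListener attribute. The class name, package, and the println body are hypothetical; this is one way to apply the advice, not the exact code from the linked article.

    import javax.faces.event.AbortProcessingException;
    import javax.faces.event.ActionEvent;
    import javax.faces.event.ActionListener;

    // Registered in the page as:
    // <h:commandButton value="testActionListener" immediate="true">
    //     <f:actionListener type="com.example.LoginActionListener" />
    // </h:commandButton>
    public class LoginActionListener implements ActionListener {
        public void processAction(ActionEvent event) throws AbortProcessingException {
            // Runs once per click; keep the listener logic here instead of pointing
            // both action and actionListener at the same backing-bean method.
            System.out.println("login button clicked");
        }
    }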

  • Method double execution

    I'm trying to run my method. It works perfectly, but runs twice as much as I want it to, and this cannot be fixed by dividing the number by two. Maybe I'm not calling it correctly: one execution comes from the call to the method and one from the loop.
    Here is my execution file:
    import java.util.*;
    import java.lang.*;

    public class methodExecuter {
      public static void main (String[] args) {
        Scanner sc = new Scanner (System.in);
        //ALARM
        System.out.println ("Enter number:");
        int num = sc.nextInt();
        System.out.print (Alarm.Alarmed(num)); //PROBLEM LINE
      }
    }

    And here is my method file:
    import java.awt.*;
    import java.lang.*;

    public class Alarm {
      //class constructor - no info required
      public Alarm () {
      }

      public static String Alarmed (int repeat) {
        String alarm = "Alarm";
        if (repeat < 1) {
          System.out.println ("Sorry, number has to be greater than one!");
        }
        else {
          for (int i = 0; i <= repeat; i++) {
            System.out.println (alarm); //PROBLEM LINE
          }
        }
        return alarm;
      }
    }

    1) Don't call Alarm.Alarmed within a System.out.println(..); rather, just call the method itself.
    2) Within the Alarmed method, use a for loop, not a while loop, since you know in advance how many times it should be repeated.
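
    Putting suggestion 1) into practice could look like the sketch below (two files, as in the question): the printing stays inside Alarmed, and the caller invokes the method directly instead of wrapping it in System.out.print. Changing the return type to void and tightening the loop bound to i < repeat (so the text prints exactly repeat times) are my own assumptions, not part of the original answer.

    import java.util.Scanner;

    public class methodExecuter {
      public static void main (String[] args) {
        Scanner sc = new Scanner (System.in);
        System.out.println ("Enter number:");
        int num = sc.nextInt();
        Alarm.Alarmed(num); // the method does its own printing
      }
    }

    public class Alarm {
      public static void Alarmed (int repeat) {
        if (repeat < 1) {
          System.out.println ("Sorry, number has to be greater than one!");
          return;
        }
        for (int i = 0; i < repeat; i++) { // '<' rather than '<=' so it prints exactly repeat times
          System.out.println ("Alarm");
        }
      }
    }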

  • Double execution onbeforeunload after trying to close tab

    IE 11.0.9600
    I have this script:
    public interface ICloseHandler
    {
        string Message { get; set; }
        void Initialize();
    }

    public class InBrowserCloseHandler : ICloseHandler
    {
        private const string ScriptableObjectName = "InBrowserCloseHandler";

        #region ICloseHandler Members
        public void Initialize()
        {
            HtmlPage.RegisterScriptableObject(ScriptableObjectName, this);
            string pluginName = HtmlPage.Plugin.Parent.Id;
            HtmlPage.Window.Eval(string.Format(
                @"window.onbeforeunload = function ()
                  {{
                      var slApp = document.getElementById('{0}').getElementsByTagName('object')[0];
                      var result = slApp.Content.{1}.OnBeforeUnload();
                      if(result.length > 0)
                          return result;
                  }}",
                pluginName, ScriptableObjectName));
        }

        public string Message { get; set; }
        public static bool StopActivity { get; set; }
        #endregion

        [ScriptableMember]
        public string OnBeforeUnload()
        {
            if (!StopActivity)
            {
                return Message;
            }
            else
            {
                StopActivity = false;
                return string.Empty;
            }
        }
    }

    public class OutOfBrowserCloseHandler : ICloseHandler
    {
        #region ICloseHandler Members
        public void Initialize()
        {
            Application.Current.MainWindow.Closing +=
                (s, e) =>
                {
                    if (!StopActivity)
                    {
                        MessageBoxResult boxResult = MessageBox.Show(
                            Message,
                            string.Empty,
                            MessageBoxButton.OKCancel);
                        if (boxResult == MessageBoxResult.Cancel)
                            e.Cancel = true;
                    }
                    else
                    {
                        StopActivity = false;
                    }
                };
        }

        public string Message { get; set; }
        public static bool StopActivity { get; set; }
        #endregion
    }

    public class PowerfullCloseHandler : ICloseHandler
    {
        private readonly ICloseHandler _closeHandler;

        public PowerfullCloseHandler()
        {
            if (!Application.Current.IsRunningOutOfBrowser)
                _closeHandler = new InBrowserCloseHandler();
            else
                _closeHandler = new OutOfBrowserCloseHandler();
        }

        #region ICloseHandler Members
        public string Message
        {
            get { return _closeHandler.Message; }
            set { _closeHandler.Message = value; }
        }

        public static bool StopActivity
        {
            get
            {
                if (!Application.Current.IsRunningOutOfBrowser)
                    return InBrowserCloseHandler.StopActivity;
                else
                    return OutOfBrowserCloseHandler.StopActivity;
            }
            set
            {
                if (!Application.Current.IsRunningOutOfBrowser)
                    InBrowserCloseHandler.StopActivity = value;
                else
                    OutOfBrowserCloseHandler.StopActivity = value;
            }
        }

        public void Initialize()
        {
            _closeHandler.Initialize();
        }
        #endregion
    }

    public partial class App : Application
    {
        private MyProject.Helpers.Addons.ICloseHandler _handler;

        public App()
        {
            InitializeComponent();
            _handler = new PowerfullCloseHandler();
            _handler.Initialize();
            _handler.Message = Strings.Message_WarningClosingWindow;
        }
    }
    Before downloading files on the web site, I set "StopActivity = true":
    MyProject.Helpers.Addons.PowerfullCloseHandler.StopActivity = true;
    HtmlPage.Window.Navigate(new Uri(String.Format(BLL.MyProjectAppHost + "/download.aspx?file={0}&filename={1}", new string[] { e.OutGUID, e.OutFileName })));
    so that I would not receive a warning.
    But when I try to close a tab and cancel the action, the next time I download a file the "onbeforeunload" event fires twice:
    @"window.onbeforeunload = function ()
    var slApp = document.getElementById('{0}').getElementsByTagName('object')[0];
    var result = slApp.Content.{1}.OnBeforeUnload();
    if(result.length > 0)
    return result;
    As a result, I start receiving the warning during the download.
    Google Chrome does not have this problem; it works great.

    Developer-specific resources would include:
    MSDN IE Development Forum (post such questions here instead)
    http://social.msdn.microsoft.com/Forums/ie/en-US/home?forum=iewebdevelopment
    Tip: Include a link to your web-site or test pages (if possible) when posting in Developer forums.
    IE Developer Resources
    http://www.modern.ie/en-us
    IE Developer Center - Key features by area
    http://msdn.microsoft.com/en-US/ie/aa740473
    IE11 Guide for Developers
    http://msdn.microsoft.com/en-us/library/ie/bg182636(v=vs.85).aspx
    Scan for common coding problems
    http://msdn.microsoft.com/en-US/ie/
    ~Robear Dyer (PA Bear) MS MVP-Windows Client since 2002 Disclaimer: MS MVPs neither represent nor work for Microsoft

  • Double Execution needed

    Hi,
    When an approver executes a request (work item) for training attendance in BWSP, they receive a further dialog screen requesting them to execute again.
    The intermediate screen has an Execute button on it and is in transaction BWWI_EXECUTE.
    It should go straight from the initial execution of the work item into the screen showing 'Booking Approval'.
    Any ideas?
    Noel

    Hi
    Sorry to dig up this old thread but I'm currently facing the same problem with my SRM implementation.
    The BWWI_EXECUTE screen is supposed to display details of a work item (missed deadline), but instead of leading to the correct screen, it shows the BWWI_EXECUTE intermediate screen. Only after clicking the "Execute" button do I get my work item display.
    I've tried doing a complete service publish from SE80 (it completed successfully), but to no avail (the screen shows Saved/Partly-published status -- does this mean an error somewhere?).
    The system is currently using SRM 5.00, integrated ITS with SAP 6.4.

  • Stored Proc running twice using DBMS_Scheduler

    Hello all,
    I have a VB front end that calls a main stored proc, which submits scheduler jobs to execute several stored procs asynchronously. Everything is working, except that the several stored procs are running twice. In troubleshooting, I have eliminated the front end and the stored procs themselves as the culprits. Essentially, when I call a stored proc using dbms_scheduler.create_job, it runs twice, even manually. I am about at my wits' end trying to figure out why. I am using Oracle 11gR2.
    I started off by setting up the programs:
    begin
    --create program
    dbms_scheduler.create_program
    ( program_name => 'prog_name'
    ,program_type => 'STORED_PROCEDURE'
    ,program_action => 'usp_sub_proc_1'
    ,number_of_arguments => 8
    ,enabled => FALSE
    );
    dbms_scheduler.DEFINE_PROGRAM_ARGUMENT
    ( program_name=> 'prog_name'
    ,argument_position=>1
    ,argument_name => 'name'
    ,argument_type=>'VARCHAR2'
    );
    /*the remaining 7 arguments are in the code but not displayed, for space reasons*/
    dbms_scheduler.enable('prog_name');
    end;

    Then the main stored proc executes this code:
    declare v_job_name varchar2(100);
        v_1 varchar(50) := 'All';
        v_2 varchar(50) := 'All';
        v_3 varchar(50) := 'All';
        v_4 varchar(50) := 'All';
        v_5 varchar(50) := 'TEST';
        i_6 integer := 1;
        v_7 varchar(50) := 'TEST_NE';
        ts_8 timestamp := current_timestamp;
    begin
        v_job_name := 'uj_dmo_1';
    dbms_scheduler.create_job (v_job_name
                                            ,program_name => 'prog_name'
                                            ,job_class => 'UCLASS_1'
                                            ,auto_drop => TRUE
                                            );
    --set parameters
    dbms_scheduler.set_job_argument_value(v_job_name,1, v_1);
    dbms_scheduler.set_job_argument_value(v_job_name,2, v_2);
    dbms_scheduler.set_job_argument_value(v_job_name,3, v_3);
    dbms_scheduler.set_job_argument_value(v_job_name,4, v_4);
    dbms_scheduler.set_job_argument_value(v_job_name,5, v_5);
    dbms_scheduler.set_job_argument_value(v_job_name,6, to_char(i_6));
    dbms_scheduler.set_job_argument_value(v_job_name,7, v_7);
    dbms_scheduler.set_job_argument_value(v_job_name ,8, to_char(ts_8));
    --enable job
    dbms_scheduler.enable(v_job_name);
    --execute job
    dbms_scheduler.run_job(job_name => v_job_name , use_current_session => FALSE);
    end;
    ...And this is where I get the double execution of the job, but I am just not seeing it in my syntax, dba_scheduler_jobs, logging, etc. Any help is greatly appreciated, thanks!!

    Well, apparently I will not win any Captain Obvious awards.
    34MCA2K2's response about "what doesn't work" for some reason turned the light on. After some more testing, here is what I found.
    This code works as expected:
    Exhibit A
    begin
    dbms_scheduler.create_job (job_name =>'TESTER'
                                   ,job_type => 'PLSQL_BLOCK'
                                   ,job_action => 'declare test1 integer := 1; begin test1 := test1 + 5; end;'
                                   ,auto_drop => True
                                   );
       /*dbms_scheduler.enable('TESTER');   */
       dbms_scheduler.run_job(job_name => 'TESTER', use_current_session =>FALSE);   
    end;

    As does this:
    Exhibit B
    begin
    dbms_scheduler.create_job (job_name =>'TESTER'
                                   ,job_type => 'PLSQL_BLOCK'
                                   ,job_action => 'declare test1 integer := 1; begin test1 := test1 + 5; end;'
                                   ,auto_drop => True
                                   );
       dbms_scheduler.enable('TESTER');  
      /*dbms_scheduler.run_job(job_name => 'TESTER', use_current_session =>FALSE);    */
    end;

    Exhibit A will create the job, which is visible in the scheduler jobs view, and RUN_JOB will execute it even when not enabled, but the PL/SQL block will not drop the job.
    Exhibit B will create the job and, once enabled, executes it and then drops it from the scheduler jobs view.
    Therefore, my desired result for running the jobs once asynchronously and dropping them immediately is:
    begin
        v_job_name := 'uj_dmo_1';
    dbms_scheduler.create_job (v_job_name
                                            ,program_name => 'prog_name'
                                            ,job_class => 'UCLASS_1'
                                            ,auto_drop => TRUE
                                            );
    --set parameters
    dbms_scheduler.set_job_argument_value(v_job_name,1, v_1);
    dbms_scheduler.set_job_argument_value(v_job_name,2, v_2);
    dbms_scheduler.set_job_argument_value(v_job_name,3, v_3);
    dbms_scheduler.set_job_argument_value(v_job_name,4, v_4);
    dbms_scheduler.set_job_argument_value(v_job_name,5, v_5);
    dbms_scheduler.set_job_argument_value(v_job_name,6, to_char(i_6));
    dbms_scheduler.set_job_argument_value(v_job_name,7, v_7);
    dbms_scheduler.set_job_argument_value(v_job_name ,8, to_char(ts_8));
    /*enable job*/
    dbms_scheduler.enable(v_job_name);
    /*execute job (Do not execute the code below, it will lead to multiple executions)
    dbms_scheduler.run_job(job_name => v_job_name , use_current_session => FALSE); */
    end;

  • Inserting an entry in to a partition in a particular node

    I have the below scenario
    (1) Initiate Cache A with a Map Trigger
    (2) Initiate Cache B
    (3) Insert Entry A into Cache A. This will call the Map Trigger
    (4) I am inserting Entry B into Cache B in the Map Trigger
    I want to keep Entry A and Entry B on the same node. Data affinity would not work in this scenario, since Cache A and Cache B are from two different cache services.
    Cache A and Cache B are distributed caches.
    Is there a way I can specify that Entry B be inserted on the same node the trigger is running on?
    Thanks
    Dasun.

    Hi Dasun,
    there can be no hard guarantees provided by Coherence for this problem. Coherence cannot guarantee that you configured it so that both cache services run on exactly the same nodes in a storage-enabled way, nor that you configured the same partition count. Also, different lifecycle-related events can happen to a service, which could make only one of the services die on a certain node...
    There is a best-effort PartitionAssignmentStrategy implementation (called com.tangosol.net.partition.MirroringAssignmentStrategy) which assumes that the configuration is appropriate, but even that cannot guarantee that the partition lifecycles are synchronized. Therefore, even with it, the trigger should not go to the backing map of the other service. Also, if service B is overwhelmed by operations, it would make service A run slowly too if service A invoked service B from within the trigger.
    The only safe approach is to use MirroringAssignmentStrategy and offload operations from the trigger to another (single) thread which carries out the operations on service B; these would hopefully be local operations, but even if not, it would not break (a sketch of this offloading approach follows below). The usual idempotency concerns of course apply, as operations on service A may be re-executed, leading to double execution of trigger A.
    Best regards,
    Robert
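
    To illustrate the offloading approach Robert describes, here is a minimal sketch of a MapTrigger on Cache A that hands the write to Cache B off to a single worker thread. The cache name "B", the decision to mirror the key and value unchanged, and the static single-thread executor are assumptions for the sketch; trigger serialization details, equals/hashCode, and registration (for example via com.tangosol.util.MapTriggerListener on Cache A) are omitted for brevity.

    import java.io.Serializable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.MapTrigger;

    public class OffloadingTrigger implements MapTrigger, Serializable {

        // One worker thread, so the service A thread never calls into service B directly.
        private static final ExecutorService WORKER = Executors.newSingleThreadExecutor();

        public void process(MapTrigger.Entry entry) {
            final Object key   = entry.getKey();
            final Object value = entry.getValue();

            WORKER.submit(new Runnable() {
                public void run() {
                    // Runs outside the cache service thread; with MirroringAssignmentStrategy
                    // this put is hopefully local, but the code stays correct even if it is not.
                    NamedCache cacheB = CacheFactory.getCache("B");
                    cacheB.put(key, value);
                }
            });
        }
    }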

  • Message Status as scheduled..

    Hello All,
    I configured a scenario which uses SOAP as the sender and the XI adapter as the receiver.
    It is a synchronous call. I can see different message statuses...
    Initially it came as a black triangle --> scheduled for outbound processing.
    Now it is showing a green flag, which is just "scheduled"...
    What is wrong, and how do I troubleshoot further?
    Thanks,
    Srini

    See below SAP Note 1118297:
    Liang
    =====Note 1118297===========
    Symptom
    You are using the Exchange Infrastructure and you are processing asynchronous messages. These messages are scheduled in queues that are no longer processed. The inbound queue of the qRFC Monitor (transaction SMQ2), for example, contains entries with the status "Running". However, you recognize in the process overview (transaction SM50) that there is no active work process.
    Other terms
    QRFC, SXMS_ASYNC_EXEC, inbound, running, resource
    Reason and Prerequisites
    You are using the Exchange Infrastructure and you are processing asynchronous messages. These messages are scheduled in the qRFC and are processed only slowly or not at all.
    Solution
    The asynchronous messages are scheduled in the queues.
    Check the following points of your middleware.
    Queue configuration:
    If the qRFC inbound scheduler is configured, see Note 369007 (qRFC: Configuration for the QIN Scheduler).
    Are the XI queues registered and active? In transaction SXMB_ADMIN->Administration->Manage Queues, you can register the queues for the XI runtime.
    If the resources are configured for TRFC/QRFC, note the common usage of system resources. For further details, see Notes
    74141 Resource management for tRFC and aRFC
    527481 tRFC or qRFC calls are not processed
    For the qRFC resources see Note
    1051445 qRFC scheduler does not use all available resources
    For further information see Unit 3.1 of the "SAP Exchange Infrastructure Tuning Guide", which you can download from the marketplace.
    In the XI configuration you can set locks for the processing of XI messages. Note that for many parallel queues there are also sufficient resources for lock objects. At least one enqueue entry is required for each queue, depending on the configuration. For further information see Notes
    552289 FAQ: Lock management R/3
    654328 Enqueue: System log message GE9
    Queue status:
    Transaction SMQ2 contains queue entries with the following status
    "Running"
    If the J2EE engine is shut down in production mode with the mapping engine or recipient Adapter Framework, the qRFC entries being processed may not recognize this and remain in the "Running" status. A symptom can be, for example, that the RFC destination "AI_RUNTIME_JCOSERVER" is no longer registered. You can find this information in the Gateway Monitor (transaction SMGW).
    Otherwise, the system may reach this status if the application server is shut down in a controlled manner and long-running processes can no longer terminate.
    The system no longer processes any message. In transaction SMQR, you can activate the queues manually again. You can also schedule the report RSQIWKEX, which automatically restarts the queues. For further information see Notes
    620633 Status RUNNING in SMQ2 with XI queues
    864333 RSQOWKEX & RSQIWKEX start Running(Executed) queues
    "Sysfail"
    A queue entry with the status "Sysfail" can have two possible causes.
    1. Further processing cannot be carried out for technical reasons (for example, the program terminates with a dump, or a system (mapping or recipient system) cannot be reached).
    2. Queue processing should be stopped, as the message cannot be processed, due to a configuration error. Processing is not permitted for the subsequent messages, as the queue sequence displays the message sequence. You can use the report RSXMB_RESTART_MESSAGES to restart the message processing again. See Note
    813029 Automatic processing of failed XI messages
    XI-QRFC and IDoc-TRFC processing:
    Note that when you use the IDoc adapter this uses the RFC/TRFC resources, which are used together with the QRFC resources. This means an intensive usage of the TRFC can influence the performance of the XI queue message processing. When possible always use the package processing of the IDoc adapter (transport of several IDocs with an RFC call to the recipient system):
    1. Package created from an IDoc XML payload with several IDocs
    2. Packaging, package created with IDoc package filter
    3. Package created with message packaging (queue message package)
    (as of NW2004s)
    For more information about acknowledgements, see also Note:
    1111968 IDoc adapter: Parallel processing of acknowledgements
    to reduce the number of acknowledgement queues.
    Tuning Balancing parameter :
    The BALANCING parameter activates the entries for the parameters B_EO_IN_PARALLEL_SENDER, B_EO_OUT_PARALLEL and B_EO_IN_PARALLEL.
    They are used if the number of parallel queues is changed and the messages are distributed to the new queues. A balancing procedure is already used in the Standard System to distribute the messages equally to the queues. If the Balancing parameter is continually active, it has a negative impact on performance (as queue entries are constantly distributed between the queues and this means they cannot be processed). This can cause problems during queue processing.
    Its use in the Tuning Guide is not described clearly.
    Oracle database:
    When using an Oracle database see Note
    742950 Performance affected on Oracle DB with supplement 11
    if you discover performance problems when processing queue entries. This improves the scheduling process only, and not the processing time of a queue entry.
    See also Note
    1020260 Delivering Oracle statistics
    EO (exactly once) handling of XI-Messages when you restart QRFC entries:
    The XI log guarantees the uniqueness of the transfer between the client and server. A restart does not trigger a double execution of messages. Note that a synchronization of the reorganization between client and server is guaranteed. The EO recognition is executed with the message ID. The retention time determines the amount of time this information is retained in the system for.
    The individual messages are saved to the database in the system with a key from the message ID and version. This prevents a parallel execution of the same message. A queue entry can be executed in parallel using the report RSQIWKEX or by carrying out a manual start from the queue. We recommend to set the parameter LOCK_MESSAGE (default value is active "1") of the category "RUNTIME" in the XI configuration to the value '0'. More information is contained in Note
    1058915 Outbound queue remains in status 'SYSFAIL'
    =====End of Note 1118297===========

  • Per-user cronjob (somewhat automatically)

    Hi all,
    probably some of you have a solution for this already in place:
    I have switched from fetchmail to mpop and miss the daemon feature. When I tried to launch mpop with the following mpop file in /etc/cron.d, it didn't work (I think it didn't download any messages, probably due to the fact that my mpoprc/mailbox definition is in my personal home dir. But even with su it didn't work.):
    */10 * * * * su wagner -c /usr/bin/mpop -qa
    With the crontab command, however I can insert it in my personal crontab and it's working fine. Now should I put such a crontab command in my .profile or what?
    Second, I would like to automatically enable/disable that cron job depending on whether I'm online or not, and
    thirdly, I would like to be able to call mpop manually (I have a keybinding in mutt for that), but that should be a no-op when the cron job is currently executing.
    Thanks for any hints,
    Andreas

    First, the cron daemon executes jobs from the crontabs of any user: just edit the crontab of the user with 'crontab -e -u <user>' and the cron daemon will do everything.
    For the verification of the internet connection, you can use a script which pings some hosts and executes the action only if the ping is successful:
    #!/bin/bash
    IP1=0.0.0.0 # enter an IP address here
    IP2=0.0.0.0 # enter an IP address here
    IP3=0.0.0.0 # enter an IP address here
    IP4=0.0.0.0 # enter an IP address here
    COMMAND=$1
    STATUS=Up
    ping -c1 $IP1
    if [ $? != 0 ]; then
        ping -c1 $IP2
        if [ $? != 0 ]; then
            ping -c1 $IP3
            if [ $? != 0 ]; then
                ping -c1 $IP4
                if [ $? != 0 ]; then
                    STATUS=Down
                    echo "down"
                fi
            fi
        fi
    fi
    if [ $STATUS = Up ]; then
        echo "up"
        exec $COMMAND
    fi
    Call this script e.g. 'netex', and then use 'netex <yourdesiredaction>' as a cron job definition.
    On the first point I do not know, but many mail fetchers use a lock to prevent a double execution (e.g. fdm). I do not know mpop so I do not know how to get the same result.

  • 11g new feature "Partition pruning based on bloom filtering" is what?

    While idly reading the Oracle 11g Database New Features I stumbled upon the following - BEGIN QUOTE:
    1.11.1.2 Enhanced Partition Pruning Capabilities
    Partition pruning now uses bloom filtering instead of subquery pruning. While subquery pruning was activated on a cost-based decision and consumed internal (recursive) resources, pruning based on bloom filtering is activated all the time without consuming additional resources.
    END QUOTE
    I haven't found any other references to bloom filtering in the manuals, and very few via Google and MetaLink. So I am left wondering what the above paragraph actually means.
    Best regards,
    Hans Henrik Krohn

    Hi Hans
    The problem with subquery pruning is that part of the SQL statement is executed twice. Therefore, a cost-based decision is necessary to decide whether it makes sense to do it or not...
    To avoid this double execution, they introduced join-filter pruning (which takes advantage of a bloom filter). Since this new method has very small overhead, it always makes sense to use it when pruning can be used. With it you will see execution plans like the following one.
    -----------------------------------------------------------------
    | Operation                           | Name    | Pstart| Pstop |
    -----------------------------------------------------------------
    | HASH JOIN                           |         |       |       |
    |  PART JOIN FILTER CREATE            | :BF0000 |       |       |
    |   TABLE ACCESS BY GLOBAL INDEX ROWID| T       | ROWID | ROWID |
    |    INDEX UNIQUE SCAN                | T_PK    |       |       |
    |  PARTITION RANGE JOIN-FILTER        |         |:BF0000|:BF0000|
    |   TABLE ACCESS FULL                 | T       |:BF0000|:BF0000|
    -----------------------------------------------------------------

    HTH
    Chris

  • Query result cache with functions

    Hi all,
    one of my colleagues has found a slightly weird behavior of the query result cache. He has set result_cache_mode = 'FORCE' (but it can be reproduced with a result_cache hint too), and suddenly functions called from the query get executed twice (on the first execution).
    An easy example:
    alter session set result_cache_mode = 'FORCE';
    create sequence test_seq;
    create or replace function test_f(i number)
    return number
    is                  
    begin
      dbms_output.put_line('TEST_F executed');
      --autonomous transaction or package variable can be used too
      return test_seq.nextval;
    end;
    prompt First call
    select test_f(1) from dual;
    prompt Second call
    select test_f(1) from dual;
    drop sequence test_seq;
    drop function test_f;
    First call
    TEST_F(1)
             2
    TEST_F executed
    TEST_F executed
    Second call
    TEST_F(1)
             1
    As you can see, on the first run the function is executed twice and returns the value from the second execution. When I execute the query again, it returns the value from the first execution... but that doesn't matter; the problem is the double execution. Our developers are used to sending emails via select (it's easier for them):
    select send_mail(...) from dual;
    ... and now the customers complain that they get the emails twice.
    And now the question: is there any way to get rid of this behavior (without changing the parameter back or rewriting the code)? I thought the result cache was automatically disabled for non-deterministic functions... or is this expected behavior?
    Thanks,
    Ivan

    Interesting.. you are right:
    SELECT /*+ RESULT_CACHE */ 'dog' FROM DUAL;
    And at the second execution:
    | Id  | Operation        | Name                       | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT |                            |     1 |     2   (0)| 00:00:01 |
    |   1 |  RESULT CACHE    | cc5k01xyqz3ypf9t0j28r5gtd1 |       |            |          |
    |   2 |   FAST DUAL      |                            |     1 |     2   (0)| 00:00:01 |
    Hmmm..

  • Calling a workflow from a program

    Hello All,
    I am calling a workflow using FM 'SAP_WAPI_START_WORKFLOW' from an SE38 ABAP program.
    My workflow consists of a simple user decision to approve or reject.
    From transaction SWDD, if I execute the workflow, it takes me to the test workflow screen, and there I have to execute once more to see the mail with the Approve and Reject buttons in my Business Workplace inbox.
    But when I call the workflow using the above FM, it does not work.
          CALL FUNCTION 'SAP_WAPI_START_WORKFLOW'
            EXPORTING
              task        = 'WS99000093'
              language    = sy-langu
            IMPORTING
              return_code = p_subrc
              workitem_id = p_workitem.
    It doesn't get triggered, I guess. Does this have something to do with the double execution described above, which takes place when we do it manually (one normal execution of the workflow and another for the test workflow)?
    Help Appreciated.
    Regards,
    Mac

    cross-post, please ask each question only once, this one will be locked.
    Thomas

  • Xcode executable execution working directory on double-click

    I have a fairly simple C++ program that I made with Xcode 3.1. It compiles fine and works if I run it from a terminal. But if I double-click the executable, it launches and looks for some supporting local files at my "Macintosh HD" directory level instead of in the folder I launched it from (output files are also placed at that directory level). All file paths in the code are relative.
    Is this a compilation issue or option that I can change in Xcode? How do I make it so I can also just double-click the executable?
    Thanks,
    John

    Double-clicking actually launches Terminal with the root directory as the current working directory, which then runs your C++ code. Change your code to look at the executable path in argv and use it to access the supporting files.

  • Is there a Data Execution Prevention compatible version of iTunes?  I have reinstalled iTunes 10.5 at least 10 times, and after it installs, it will not open because of DEP.

    I have tried all the advice I have seen on Apple Support discussion boards for the past 2 days, and nothing works.  Disabling DEP is not possible, regardless of what the Windows Support discussions tell you.  In the meantime, I have absolutely no access to iTunes, or even the ability to update my iPhone or iPod.  The only feasible solution is the possibility of a version of iTunes that is compatible with Data Execution Prevention settings.
    In the past 2 days, I have:  Removed iTunes and related components from the Control Panel (numerous times).  Per the instructions on the Apple Support discussion pages, I did it in this order:  1. iTunes  2. QuickTime  3. Apple Software Update  4. Apple Mobile Device Support  5. Bonjour  6. Apple Application Support.  After all that, I reinstalled iTunes.
    Everything looks fine during installation, with no error messages.  At the end, it says everything was successfully installed.  However, when the installation tool closes, iTunes will only partially open a window, but will stay blank.  Then I get the message that iTunes has stopped working and that Windows has shut it down.  Then I get notified that DEP has caused iTunes to shut down. 
    Can someone please help?  Or, is there a DEP compatible version of iTunes? 
    I would appreciate any help.  Thanks!!

    Polydorus,
    Thank you for your kind reply.  During the many times I uninstalled iTunes and all the other Apple programs, I only used Uninstall Programs in the Windows Control Panel.  To make a long story short, I found a solution that works for me, but it is still not a complete solution.
    Here is what I did:
    I used Uninstall Program in the Windows Control Panel to uninstall everything IN THIS ORDER
    1. iTunes
    2. QuickTime
    3. Apple Software Update
    4. Apple Mobile Device Support
    5. Bonjour
    6. Apple Application Support
    Then I went to C:\Program Files and looked for any iTunes or Apple program listed there and deleted it.
    I have 64-bit, so I then went to C:\Program files (x86) and looked for any iTunes, QuickTime, Bonjour or any other file or folder that had Apple or any Apple program in the name and deleted it.
    I went to C:\Windows\SysWOW64\QuickTime and C:\Windows\SysWOW64\QuickTimeVR and deleted them.
    Go back to START, and open the "C" drive.  Open the USERS folder.  Open the folder with your username.  Open the AppData folder.  Then double-click on the LOCAL folder to open it.  If you see any files or folders there that belong to any Apple program, delete them.  Then go to the ROAMING folder and do the same.  If there are any other users on this computer, go to each individual user and do the same thing in each LOCAL and ROAMING folder.  Restart your computer.
    Go to http://www.apple.com/itunes/download/ This is the page where you will actually download iTunes.  Scroll down the page to the section under "Windows Software"  that says "64-bit editions of Windows Vista or Windows 7 require the iTunes 64-bit installer".  Click on that line to get the installer.  It will take you to another download window.
    Scroll to the bottom of the page to the message that says "Download for iTunes 10.4.1 for Windows (64-bit) here: iTunes for Windows 64-bit."  Click there to get the download.
    This is not iTunes 10.5, so you will not have access to "The Cloud", but it is at least functional until Apple actually comes out with a version that does not trigger the DEP message.  I found no combination of uninstalling and reinstalling 10.5, with or without QuickTime, that did not cause Data Execution Prevention problems.
    I also used Firefox for the downloading instead of Internet Explorer.  It just seemed to function better that way.
    I hope this is helpful to someone.  It's just what worked for me.   

  • Execution of Reports phase error during SAP EHP4 upgrade with EHPi

    I started getting an error message in the "Execution of reports after put" phase
    (within the Downtime phase) during the SAP EHP4 upgrade with Enhancement
    Package Installer.
    ***** LIST OF ERRORS AND RETURN CODES *****
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    XPRA ERRORS and RETURN CODE in SAPRB70104.OPQ
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    1AETR012X Program terminated (job: "RDDEXECL", no.: "09064300")
    Long text:
    Cause: Program "&V#&", which was started in the background, was terminated abnormally.
    System Response: The system created a job log for the terminated program.
    What to do: Proceed as follows: Log onto the system in which the program was executed. The system is specified at the beginning of the transport log. Then call transaction SM37 to display the job log. Enter "&V#&" as job name and "DDIC" as user name. Restrict the starting date to the starting date specified in the transport log. For the other selection criteria, select only jobs with the status "cancelled". Then press <LS>Execute</>. A list of the jobs satisfying the selection criteria is displayed. You can display the log by pressing <LS>Job log</>. If the list contains several jobs, you can select the job with the ID "&V#&" with <LS>Display</> -> <LS>Job details</> or define further details for the selection criteria in the initial screen of transaction SM37. If the ABAP processor reports cancellation, double-clicking on the corresponding message in the job log branches to the display of the corresponding short dump.
    1AEPU320 See job log "RDDEXECL" "09064300" "OPQ"
    1 ETP111 exit code: "12"
    >>> PLEASE READ THE REPORT DOCUMENTATION OF THE REPORTS MENTIONED ABOVE <<<
    XPRAs are application reports that run at the end of an upgrade. Most XPRA reports have report documentation that explains what the report does and how errors can be corrected. Call transaction SE38 in the SAP system, enter the report name, select 'Documentation' and click the 'Display' button.
    >>> The problematic XPRAs are mentioned in messages of type PU132 above <<<
    I tried to follow the instructions in note 1269960, but the enhancement spot
    CLASSIFICATION_TOOL was already active. I activated it again and reran the phase
    but got the same error. Also, I couldn't implement the note using SNOTE because I am in
    the middle of the upgrade process.
    I also found the following long text in ST22
    "Internal error during syntax check: ("//bas/710_REL/src/krn/gen/scsymb.c#11"
    Processing had to be terminated because an internal error
    occurred when generating the ABAP/4 program
    "CL_CLS_BADI_CHARACTERIZATION==CP".
    The internal system error cannot be fixed by ABAP means only."
    I am on a 64 bit System with CentOS Linux. 8 GB RAM, 250 GB HD with 20 GB free space.
    So far, I could not find any information on this. Any help would be greatly appreciated!
    Thanks,
    Victor

    Hi Victor,
    Go to SM37 and put in the username as DDIC and see the job log.
    Also check SM50 and see whether you have BTC (background) work processes available.
    Last but not least, check your filesystem space usage too.
    Gerard
