External Task - prepare and commit methods

Hey guys,
I have a problem with External Task. I don't know how to make prepare and commit methods for use with a J2EE application server.
Does anyone have an example?
Thanks

I am having some issues with the External Task prepare and commit methods.
I am pointing the interactive activity to an External task, and in the external task I am getting instanceId, participantId and activity as query parameters. Using those query parameters I initialize the ProcessServiceSession and then call the activityPrepare method as shown below.
The instanceId value I am getting from the query string looks like "%2FScorecardChallenge%23Default-1.0%2F4%2F0".
I tried to call InstanceId.getProcessId(instanceId), but this getProcessId method returns a null value.
Does anyone have an idea why I am getting a null process ID for the instanceId string "%2FScorecardChallenge%23Default-1.0%2F4%2F0"?
My sample code looks like this:
String instanceId = this.getRequest().getParameter("instanceId");
String activity = this.getRequest().getParameter("activity");
String participantId = this.getRequest().getParameter("participantId");
this.getRequest().setAttribute("activity", activity);
this.getRequest().setAttribute("isFromBpm", "true");
try {
    System.out.println("########### BPM params " + instanceId + " : " + activity + " : " + participantId);
    // Create a PAPI session for the participant passed on the query string
    ProcessServiceSession bpmSession = ConnectPAPI.createSession(participantId, this.getRequest().getRemoteHost());
    String pid = InstanceId.getProcessId(instanceId);
    System.out.println("Process ID value " + pid);
    System.out.println("Instance ID value " + InstanceId.getInstanceId(instanceId));
    System.out.println("Instance ID value " + InstanceId.getInstanceIn(instanceId));
    if (bpmSession != null) {
        try {
            // Prepare the external activity and dump the output arguments it returns
            Arguments args = bpmSession.activityPrepare(activity, instanceId, Arguments.create());
            Map argument = (Map) args.getArguments();
            Iterator it = argument.entrySet().iterator();
            while (it.hasNext()) {
                Map.Entry pairs = (Map.Entry) it.next();
                String key = (String) pairs.getKey();
                Object value = pairs.getValue();
                System.out.println("########### BPM args " + key + " : " + value);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
But my problem here is that I am getting a NullPointerException like the one below (I found that the NullPointerException occurs because the process ID value is null).
The output and stack trace are:
Process ID value null
Instance ID value %2FScorecardChallenge%23Default-1.0%2F4%2F0
Instance ID value -1
bpmSesssion instance details abstract class fuego.papi.ProcessServiceSesion
bpmSesssion instance is not null
java.lang.NullPointerException
     at fuego.directory.DirDeployedProcess.isConsolidatedId(DirDeployedProcess.java:114)
     at fuego.papi.impl.ProcessServiceSessionImpl.getProcessControl(ProcessServiceSessionImpl.java:2328)
     at fuego.papi.impl.ProcessServiceSessionImpl.processGetInstance(ProcessServiceSessionImpl.java:1926)
     at fuego.papi.impl.ProcessServiceSessionImpl.activityPrepare(ProcessServiceSessionImpl.java:1128)
     at com.rollsroyce.gsp.poc.samplePOC.SamplePOCController.begin(SamplePOCController.java:57)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
     at java.lang.reflect.Method.invoke(Method.java:585)
     at org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:879)
     at org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:809)
     at org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:478)
     at org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:306)
     at org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:336)
     at org.apache.beehive.netui.pageflow.internal.FlowControllerAction.execute(FlowControllerAction.java:52)
     at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:431)
     at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.access$201(PageFlowRequestProcessor.java:97)
     at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor$ActionRunner.execute(PageFlowRequestProcessor.java:2044)
     at org.apache.beehive.netui.pageflow.interceptor.action.internal.ActionInterceptors$WrapActionInterceptorChain.continueChain(ActionInterceptors.java:64)
     at org.apache.beehive.netui.pageflow.interceptor.action.ActionInterceptor.wrapAction(ActionInterceptor.java:184)
     at org.apache.beehive.netui.pageflow.interceptor.action.internal.ActionInterceptors$WrapActionInterceptorChain.invoke(ActionInterceptors.java:50)
     at org.apache.beehive.netui.pageflow.interceptor.action.internal.ActionInterceptors$WrapActionInterceptorChain.continueChain(ActionInterceptors.java:58)
     at org.apache.beehive.netui.pageflow.interceptor.action.internal.ActionInterceptors.wrapAction(ActionInterceptors.java:87)
     at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processActionPerform(PageFlowRequestProcessor.java:2116)
     at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:236)
     at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.processInternal(PageFlowRequestProcessor.java:556)
     at org.apache.beehive.netui.pageflow.PageFlowRequestProcessor.process(PageFlowRequestProcessor.java:853)
     at org.apache.beehive.netui.pageflow.AutoRegisterActionServlet.process(AutoRegisterActionServlet.java:631)
     at org.apache.beehive.netui.pageflow.PageFlowActionServlet.process(PageFlowActionServlet.java:158)
     at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)
Please help me if anyone knows what the problem is.
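For reference, the instanceId in the output above still contains "%2F" and "%23", which suggests the query parameter is URL-encoded ("/ScorecardChallenge#Default-1.0/4/0" in decoded form). Below is a minimal sketch of decoding it before handing it to the PAPI helpers, assuming java.net.URLDecoder is acceptable in this controller; whether the encoding is the actual cause of the null process ID is not confirmed here.
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

// Raw parameter as received: "%2FScorecardChallenge%23Default-1.0%2F4%2F0"
String rawInstanceId = this.getRequest().getParameter("instanceId");
String decodedInstanceId = rawInstanceId;
try {
    // Restores the "/" and "#" separators that PAPI expects in an instance id
    decodedInstanceId = URLDecoder.decode(rawInstanceId, "UTF-8");
} catch (UnsupportedEncodingException e) {
    e.printStackTrace();
}
// If the encoding was the problem, this should now return "/ScorecardChallenge#Default-1.0"
String pid = InstanceId.getProcessId(decodedInstanceId);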

Similar Messages

  • External task - prepare method - decode-encode

In the prepare method, I wrote code referring to these threads:
How to invoke a java application in the middle of the process
Using Papi to operate with process
If the error is caused by decoding and encoding the URL, please tell me how to fix this problem.
Thanks a lot.
The code snippet is as follows:
    -------------------code-----------------------------------------------
package getinstance;

import fuego.lang.DynamicObject;
import fuego.papi.Arguments;
import fuego.papi.CommunicationException;
import fuego.papi.InstanceInfo;
import fuego.papi.OperationException;
import fuego.papi.ProcessService;
import fuego.papi.ProcessServiceSession;

import java.util.Iterator;
import java.util.Map;
import java.util.Properties;

public class prepare {

    public prepare() {
    }

    public static void main(String[] args) {
        prepare prepare = new prepare();
        Properties configuration = new Properties();
        configuration.setProperty(ProcessService.DIRECTORY_ID, "default");
        configuration.setProperty(ProcessService.DIRECTORY_PROPERTIES_FILE, "D:\\BPM_HOME_standalone\\webapps\\papiws\\WEB-INF\\directory.xml");
        configuration.setProperty(ProcessService.WORKING_FOLDER, "/tmp");
        System.out.println("1111111111111");
        Arguments argumentsResult = Arguments.create();
        String taskIn = "0";
        //String activity = "/InvokeJavaProcess#Default-1.0/InteractiveJava"; //both are fine
        String activity = "InteractiveJava"; //both are fine
        String instanceId = "/InvokeJavaProcess#Default-1.0/203/0";
        try {
            ProcessService processService = ProcessService.create(configuration);
            ProcessServiceSession session = processService.createSession("jcooper", "welcome1", "host");
            System.out.println("33333 instanceID=" + instanceId);
            argumentsResult = session.activityPrepare(activity, instanceId, Arguments.create()); // error starts from here
            System.out.println("444444444");
            DynamicObject obj = argumentsResult.getDynamicObject();
            Map fieldMaps = obj.asFieldsMap();
            for (Iterator<String> iterator = fieldMaps.keySet().iterator(); iterator.hasNext();) {
                String key = iterator.next();
                System.out.println("key : " + key + " value : " + obj.getField(key));
            }
            DynamicObject obj2 = argumentsResult.getDynamicObject();
            Map fieldMaps2 = obj2.asFieldsMap();
            for (Iterator<String> it = fieldMaps2.keySet().iterator(); it.hasNext();) {
                String key = it.next();
                System.out.println("key : " + key + " value : " + obj2.getField(key));
            }
            session.close();
        } catch (OperationException e) {
            e.printStackTrace();
        }
    }
}
But I get errors like the following:
    -------------------error---------------------------------------
    E:\Jdeveloper10131_20090318\jdk\bin\javaw.exe -client -classpath E:\Jdeveloper10131_20090318\jdev\mywork\Java_be_invoked\getInstance\classes;D:\BPM_HOME_standalone\client\papi\lib\fuegopapi-client.jar getinstance.prepare
    1111111111111
    Creating connector [fuego:SQL]
    Creating ProcessService with id 'oracle/2009-03-31 18:08:47+08:00'.
    Local folder /tmp\system\Schema3387192-1796619082\catalogs found.
    Loading catalogs from local folder: /tmp\system\Schema3387192-1796619082\catalogs
    1 jars found locally.
    Local jar '126' succesfully loaded.
    [CatalogMgrCache] =======================
    Registering CatalogMgr [oracle/2009-03-31 18:08:47+08:00] ...CatalogManagerCache 14949315:
    Managers:
    Counters:
    [CatalogMgrCache] =======================
    CatalogMgr [oracle/2009-03-31 18:08:47+08:00] REGISTERED!CatalogManagerCache 14949315:
    Managers:
    {oracle/2009-03-31 18:08:47+08:00=fuego.util.LocalCatalogManager@1dacccc}
    Counters:
    ProcessService 'oracle/2009-03-31 18:08:47+08:00' created successfully.
    33333 instanceID=/InvokeJavaProcess#Default-1.0/203/0
    Unreachable Engine Tolerance (seconds):
    by default: 0
    to be used: 0
    This papi client will not cache exceptions which imply that an engine could not be reached.
    Adding local catalog for project: 124
    [CatalogLoaderMgrCache] =======================
    CatalogClassLoader[oracle/2009-03-31 18:08:47+08:00-124] added to cache
    Catalog ClassLoader MAP:
    {oracle/2009-03-31 18:08:47+08:00-124=CatalogClassLoader(FuegoObjectCatalog(catalogIn: 124, directoryId: oracle/2009-03-31 18:08:47+08:00))}
    Catalog Manager Cache:
    CatalogManagerCache 14949315:
    Managers:
    {oracle/2009-03-31 18:08:47+08:00=fuego.util.LocalCatalogManager@1dacccc}
    Counters:
    {oracle/2009-03-31 18:08:47+08:00=1}
    fuego.papi.exception.TaskFailedException: Task '0' in activity '/InvokeJavaProcess#Default-1.0/Interactive[InteractiveJava]' for instance '/InvokeJavaProcess#Default-1.0/203/0' could not be successfully executed. The task failed while executing method '%PREPARE%'.
         at fuego.papi.exception.TaskFailedException.create(TaskFailedException.java:57)
         at fuego.server.AbstractProcessBean.createTaskFailedException(AbstractProcessBean.java:3572)
         at fuego.fengine.FEngineProcessBean.createTaskFailedException(FEngineProcessBean.java:398)
         at fuego.server.AbstractProcessBean.runTask(AbstractProcessBean.java:3193)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at fuego.lang.JavaClass.invokeMethod(JavaClass.java:1410)
         at fuego.lang.JavaObject.invoke(JavaObject.java:227)
         at fuego.component.Message.process(Message.java:585)
         at fuego.component.ExecutionThread.processMessage(ExecutionThread.java:780)
         at fuego.component.ExecutionThread.processBatch(ExecutionThread.java:755)
         at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:142)
         at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:134)
         at fuego.fengine.FEngineProcessBean.processBatch(FEngineProcessBean.java:244)
         at fuego.component.ExecutionThread.work(ExecutionThread.java:839)
         at fuego.component.ExecutionThread.run(ExecutionThread.java:408)
         at fuego.component.CustomExecution.next(CustomExecution.java:176)
         at fuego.papi.impl.rmi.RMIExecution.next(RMIExecution.java:109)
         at fuego.papi.impl.ProcessInstanceOperation.prepareExternalActivity(ProcessInstanceOperation.java:695)
         at fuego.papi.impl.ProcessServiceSessionImpl.activityPrepare(ProcessServiceSessionImpl.java:1420)
         at fuego.papi.impl.ProcessServiceSessionImpl.activityPrepare(ProcessServiceSessionImpl.java:1414)
         at getinstance.prepare.main(prepare.java:43)
    Caused by: fuego.lang.ComponentExecutionException: The method 'CIL_interactiveJavaPrepare' from class 'oracle.InvokeJavaProcess.Default_1_0.Instance' could not be successfully executed.
         at fuego.component.ExecutionThreadContext.invokeMethod(ExecutionThreadContext.java:519)
         at fuego.component.ExecutionThreadContext.invokeMethod(ExecutionThreadContext.java:273)
         at fuego.fengine.FEEngineExecutionContext.invokeMethodAsCil(FEEngineExecutionContext.java:219)
         at fuego.server.execution.EngineExecutionContext.runCil(EngineExecutionContext.java:1280)
         at fuego.server.execution.TaskExecution.invoke(TaskExecution.java:401)
         at fuego.server.execution.InteractiveNormalCilExecution.invoke(InteractiveNormalCilExecution.java:425)
         at fuego.server.execution.TaskExecution.executeCIL(TaskExecution.java:513)
         at fuego.server.execution.TaskExecution.executeTask(TaskExecution.java:697)
         at fuego.server.execution.TaskExecution.executeTask(TaskExecution.java:657)
         at fuego.server.execution.TaskExecution.executeTask(TaskExecution.java:154)
         at fuego.server.execution.microactivity.InteractiveMicroActivity.executeNormalCil(InteractiveMicroActivity.java:501)
         at fuego.server.execution.microactivity.InteractiveMicroActivity.executeItem(InteractiveMicroActivity.java:454)
         at fuego.server.execution.microactivity.InteractiveMicroActivity.execute(InteractiveMicroActivity.java:104)
         at fuego.server.AbstractProcessBean$48.execute(AbstractProcessBean.java:3184)
         at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:304)
         at fuego.transaction.TransactionAction.startBaseTransaction(TransactionAction.java:470)
         at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:551)
         at fuego.transaction.TransactionAction.start(TransactionAction.java:212)
         at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123)
         at fuego.server.execution.EngineExecution.executeImmediate(EngineExecution.java:66)
         at fuego.server.AbstractProcessBean.runTask(AbstractProcessBean.java:3188)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at fuego.lang.JavaClass.invokeMethod(JavaClass.java:1410)
         at fuego.lang.JavaObject.invoke(JavaObject.java:227)
         at fuego.component.Message.process(Message.java:585)
         at fuego.component.ExecutionThread.processMessage(ExecutionThread.java:780)
         at fuego.component.ExecutionThread.processBatch(ExecutionThread.java:755)
         at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:142)
         at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:134)
         at fuego.fengine.FEngineProcessBean.processBatch(FEngineProcessBean.java:244)
         at fuego.component.ExecutionThread.work(ExecutionThread.java:839)
         at fuego.component.ExecutionThread.run(ExecutionThread.java:408)
    Caused by: java.lang.NullPointerException
         at oracle.InvokeJavaProcess.Default_1_0.Instance.CIL_interactiveJavaPrepare(Instance.xcdl:1)
         at oracle.InvokeJavaProcess.Default_1_0.Instance.CIL_interactiveJavaPrepare(Instance.xcdl)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at fuego.component.ExecutionThreadContext.invokeMethod(ExecutionThreadContext.java:512)
         ... 34 more
    Process exited with exit code 0.
    ----------------------------------------------------------------

    Hi Satinder,
This time I changed something in the process.
1. The process still has 3 interactive activities: jcooperkeyin, tojava, Interactivejstein.
2. I created a BPMObject including amount (decimal, 2) and purpose.
3. The main task of jcooperkeyin is a screenflow: it sets values for purpose and amount.
4. The tojava activity is external.
    Prepare method name: tojavaprepare
    method contents:
    arg1.amount=var1.amount
    arg1.purpose=var1.purpose
    Commit method: tojavacommit
    method contents:
    var1.purpose=arg2.purpose
    var1.amount=arg2.amount
5. I created an instance variable named var1 and its type is BPMObject.
6. The argument mode is changed to out.
7. I changed the Java prepare method code as you recommended.
8. After I run the Java prepare method, I get some new errors, which are as follows:
    --------------------------code----------------------------------------------------
package getinstance;

import fuego.lang.Decimal;
import fuego.lang.DynamicObject;
import fuego.papi.Arguments;
import fuego.papi.CommunicationException;
import fuego.papi.InstanceInfo;
import fuego.papi.OperationException;
import fuego.papi.ProcessService;
import fuego.papi.ProcessServiceSession;

import java.math.BigDecimal;
import java.util.Iterator;
import java.util.Map;
import java.util.Properties;

public class prepare {

    public prepare() {
    }

    public static void main(String[] args) {
        prepare prepare = new prepare();
        Properties configuration = new Properties();
        configuration.setProperty(ProcessService.DIRECTORY_ID, "default");
        configuration.setProperty(ProcessService.DIRECTORY_PROPERTIES_FILE, "D:\\BPM_HOME_standalone\\webapps\\papiws\\WEB-INF\\directory.xml");
        configuration.setProperty(ProcessService.WORKING_FOLDER, "/tmp");
        System.out.println("1111111111111");
        Arguments argumentsResult = Arguments.create();
        String taskIn = "0";
        String activity = "tojava";
        //String activity = "jcooperkeyin";
        String processId = "/InvokeJava2#Default-1.0";
        //String instanceId = "/InvokeJava2#Default-1.0/281/0";
        Arguments arguments = Arguments.create();
        DynamicObject dyn = DynamicObject.create();
        dyn.setField("amount", new BigDecimal(100.0));
        dyn.setField("purpose", "---------");
        arguments.putArgument("var1", dyn);
        try {
            ProcessService processService = ProcessService.create(configuration);
            ProcessServiceSession session = processService.createSession("jcooper", "welcome1", "host");
            String ist = "ist";
            for (InstanceInfo instance : session.processGetInstances(processId)) {
                System.out.println(" instance.getId()-> " + instance.getId());
                ist = instance.getId();
                System.out.println(" activity.getActivityId()-> " + instance.getActivityId());
                System.out.println(" activity.getActivityName()-> " + instance.getActivityName());
                System.out.println("----------111----------------------");
                argumentsResult = session.activityPrepare(activity, instance.getId(), arguments); // error starts from here
                System.out.println("-----2222-----------");
                DynamicObject obj = argumentsResult.getDynamicObject();
                Map fieldMaps = obj.asFieldsMap();
                for (Iterator<String> iterator = fieldMaps.keySet().iterator(); iterator.hasNext();) {
                    String key = iterator.next();
                    System.out.println("key : " + key + " value : " + obj.getField(key));
                }
                DynamicObject obj2 = argumentsResult.getDynamicObject();
                Map fieldMaps2 = obj2.asFieldsMap();
                for (Iterator<String> it = fieldMaps2.keySet().iterator(); it.hasNext();) {
                    String key = it.next();
                    System.out.println("key : " + key + " value : " + obj2.getField(key));
                }
                System.out.println("444444444");
            }
            session.close();
        } catch (OperationException e) {
            e.printStackTrace();
        }
    }
}
    ---------------------new error------------------------------------------------------
    E:\Jdeveloper10131_20090318\jdk\bin\javaw.exe -client -classpath E:\Jdeveloper10131_20090318\jdev\mywork\Java_be_invoked\getInstance\classes;D:\BPM_HOME_standalone\client\papi\lib\fuegopapi-client.jar;D:\BPM_HOME_standalone\client\papi\lib\b1base.jar;D:\BPM_HOME_standalone\client\papi\lib\b1oracle.jar;D:\BPM_HOME_standalone\client\papi\lib\b1util.jar getinstance.prepare
    1111111111111
    Creating connector [fuego:SQL]
    Creating ProcessService with id 'oracle/2009-03-31 18:08:47+08:00'.
    Local folder /tmp\system\Schema3387192-1796619082\catalogs found.
    Loading catalogs from local folder: /tmp\system\Schema3387192-1796619082\catalogs
    1 jars found locally.
    Local jar '181' succesfully loaded.
    [CatalogMgrCache] =======================
    Registering CatalogMgr [oracle/2009-03-31 18:08:47+08:00] ...CatalogManagerCache 14949315:
    Managers:
    Counters:
    [CatalogMgrCache] =======================
    CatalogMgr [oracle/2009-03-31 18:08:47+08:00] REGISTERED!CatalogManagerCache 14949315:
    Managers:
    {oracle/2009-03-31 18:08:47+08:00=fuego.util.LocalCatalogManager@1dacccc}
    Counters:
    ProcessService 'oracle/2009-03-31 18:08:47+08:00' created successfully.
    Unreachable Engine Tolerance (seconds):
    by default: 0
    to be used: 0
    This papi client will not cache exceptions which imply that an engine could not be reached.
    instance.getId()-> /InvokeJava2#Default-1.0/281/0
    Adding local catalog for project: 181
    activity.getActivityId()-> /InvokeJava2#Default-1.0/tojava
    activity.getActivityName()-> tojava
    ----------111----------------------
    [CatalogLoaderMgrCache] =======================
    CatalogClassLoader[oracle/2009-03-31 18:08:47+08:00-181] added to cache
    Catalog ClassLoader MAP:
    {oracle/2009-03-31 18:08:47+08:00-181=CatalogClassLoader(FuegoObjectCatalog(catalogIn: 181, directoryId: oracle/2009-03-31 18:08:47+08:00))}
    Catalog Manager Cache:
    CatalogManagerCache 14949315:
    Managers:
    {oracle/2009-03-31 18:08:47+08:00=fuego.util.LocalCatalogManager@1dacccc}
    Counters:
    {oracle/2009-03-31 18:08:47+08:00=1}
    Processing the synchronization information, instance '181:281:0' was updated.
    fuego.papi.OperationException: Operation exception.
         at fuego.papi.OperationException.wrap(OperationException.java:65)
         at fuego.papi.impl.ProcessInstanceOperation.prepareExternalActivity(ProcessInstanceOperation.java:706)
         at fuego.papi.impl.ProcessServiceSessionImpl.activityPrepare(ProcessServiceSessionImpl.java:1420)
         at fuego.papi.impl.ProcessServiceSessionImpl.activityPrepare(ProcessServiceSessionImpl.java:1414)
         at getinstance.prepare.main(prepare.java:58)
    Caused by: fuego.rmi.RMIRuntimeException: Fuego RMI: Failure during the invocation. Check the exception chain for details.
         at fuego.rmi.RemoteProxy.processBatch(RemoteProxy.java:192)
         at fuego.component.ExecutorClient.dispatch(ExecutorClient.java:190)
         at fuego.component.CustomExecution.next(CustomExecution.java:247)
         at fuego.papi.impl.rmi.RMIExecution.next(RMIExecution.java:109)
         at fuego.papi.impl.ProcessInstanceOperation.prepareExternalActivity(ProcessInstanceOperation.java:695)
         ... 3 more
    Caused by: fuego.rmi.spi.SerializationException: Unable to receive the message because of a serialization error.
         at fuego.rmi.spi.BaseConnection.send(BaseConnection.java:105)
         at fuego.rmi.ServerCluster.send(ServerCluster.java:210)
         at fuego.rmi.ServerCluster.sendResult(ServerCluster.java:461)
         at fuego.rmi.ServerCluster.access$300(ServerCluster.java:43)
         at fuego.rmi.ServerCluster$ClientRequest$1.put(ServerCluster.java:556)
         at fuego.component.ExecutionThread.sendResult(ExecutionThread.java:532)
         at fuego.component.ExecutionThreadContext.doClientInvoke(ExecutionThreadContext.java:695)
         at fuego.component.ClientRemoteComponent.doInvocation(ClientRemoteComponent.java:303)
         at fuego.component.ClientRemoteComponent.invokeRelayTo(ClientRemoteComponent.java:211)
         at fuego.component.ExecutionRelayedThrowable.execute(ExecutionRelayedThrowable.java:109)
         at fuego.server.execution.TaskExecution.handleExecutionRelayedThrowable(TaskExecution.java:816)
         at fuego.server.execution.TaskExecution.handleComponentExecutionException(TaskExecution.java:767)
         at fuego.server.execution.TaskExecution.executeCIL(TaskExecution.java:516)
         at fuego.server.execution.TaskExecution.executeTask(TaskExecution.java:697)
         at fuego.server.execution.TaskExecution.executeTask(TaskExecution.java:657)
         at fuego.server.execution.TaskExecution.executeTask(TaskExecution.java:154)
         at fuego.server.execution.microactivity.InteractiveMicroActivity.executeNormalCil(InteractiveMicroActivity.java:501)
         at fuego.server.execution.microactivity.InteractiveMicroActivity.executeItem(InteractiveMicroActivity.java:454)
         at fuego.server.execution.microactivity.InteractiveMicroActivity.execute(InteractiveMicroActivity.java:104)
         at fuego.server.AbstractProcessBean$48.execute(AbstractProcessBean.java:3184)
         at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:304)
         at fuego.transaction.TransactionAction.startBaseTransaction(TransactionAction.java:470)
         at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:551)
         at fuego.transaction.TransactionAction.start(TransactionAction.java:212)
         at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123)
         at fuego.server.execution.EngineExecution.executeImmediate(EngineExecution.java:66)
         at fuego.server.AbstractProcessBean.runTask(AbstractProcessBean.java:3188)
         at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at fuego.lang.JavaClass.invokeMethod(JavaClass.java:1410)
         at fuego.lang.JavaObject.invoke(JavaObject.java:227)
         at fuego.component.Message.process(Message.java:585)
         at fuego.component.ExecutionThread.processMessage(ExecutionThread.java:780)
         at fuego.component.ExecutionThread.processBatch(ExecutionThread.java:755)
         at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:142)
         at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:134)
         at fuego.fengine.FEngineProcessBean.processBatch(FEngineProcessBean.java:244)
         at fuego.component.ExecutionThread.work(ExecutionThread.java:839)
         at fuego.component.ExecutionThread.run(ExecutionThread.java:408)
         ... 8 more
    Caused by: java.io.NotSerializableException: java.lang.Object
         at java.io.ObjectOutputStream.writeObject0(Unknown Source)
         at java.io.ObjectOutputStream.writeObject(Unknown Source)
         at java.util.HashMap.writeObject(Unknown Source)
         at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at java.io.ObjectStreamClass.invokeWriteObject(Unknown Source)
         at java.io.ObjectOutputStream.writeSerialData(Unknown Source)
         at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source)
         at java.io.ObjectOutputStream.writeObject0(Unknown Source)
         at java.io.ObjectOutputStream.defaultWriteFields(Unknown Source)
         at java.io.ObjectOutputStream.writeSerialData(Unknown Source)
         at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source)
         at java.io.ObjectOutputStream.writeObject0(Unknown Source)
         at java.io.ObjectOutputStream.writeArray(Unknown Source)
         at java.io.ObjectOutputStream.writeObject0(Unknown Source)
         at java.io.ObjectOutputStream.writeObject(Unknown Source)
         at fuego.component.Message.writeObject(Message.java:653)
         at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at java.io.ObjectStreamClass.invokeWriteObject(Unknown Source)
         at java.io.ObjectOutputStream.writeSerialData(Unknown Source)
         at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source)
         at java.io.ObjectOutputStream.writeObject0(Unknown Source)
         at java.io.ObjectOutputStream.writeObject(Unknown Source)
         at fuego.component.Batch.writeObject(Batch.java:151)
         at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at java.io.ObjectStreamClass.invokeWriteObject(Unknown Source)
         at java.io.ObjectOutputStream.writeSerialData(Unknown Source)
         at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source)
         at java.io.ObjectOutputStream.writeObject0(Unknown Source)
         at java.io.ObjectOutputStream.writeObject(Unknown Source)
         at fuego.rmi.Packet.write(Packet.java:240)
         at fuego.rmi.spi.BaseConnection.send(BaseConnection.java:102)
         at fuego.rmi.ServerCluster.send(ServerCluster.java:210)
         at fuego.rmi.ServerCluster.sendResult(ServerCluster.java:461)
         at fuego.rmi.ServerCluster.access$300(ServerCluster.java:43)
         at fuego.rmi.ServerCluster$ClientRequest$1.put(ServerCluster.java:556)
         at fuego.component.ExecutionThread.sendResult(ExecutionThread.java:532)
         at fuego.component.ExecutionThreadContext.doClientInvoke(ExecutionThreadContext.java:695)
         at fuego.component.ClientRemoteComponent.doInvocation(ClientRemoteComponent.java:303)
         at fuego.component.ClientRemoteComponent.invokeRelayTo(ClientRemoteComponent.java:211)
         at fuego.component.ExecutionRelayedThrowable.execute(ExecutionRelayedThrowable.java:109)
         at fuego.server.execution.TaskExecution.handleExecutionRelayedThrowable(TaskExecution.java:816)
         at fuego.server.execution.TaskExecution.handleComponentExecutionException(TaskExecution.java:767)
         at fuego.server.execution.TaskExecution.executeCIL(TaskExecution.java:516)
         at fuego.server.execution.TaskExecution.executeTask(TaskExecution.java:697)
         at fuego.server.execution.TaskExecution.executeTask(TaskExecution.java:657)
         at fuego.server.execution.TaskExecution.executeTask(TaskExecution.java:154)
         at fuego.server.execution.microactivity.InteractiveMicroActivity.executeNormalCil(InteractiveMicroActivity.java:501)
         at fuego.server.execution.microactivity.InteractiveMicroActivity.executeItem(InteractiveMicroActivity.java:454)
         at fuego.server.execution.microactivity.InteractiveMicroActivity.execute(InteractiveMicroActivity.java:104)
         at fuego.server.AbstractProcessBean$48.execute(AbstractProcessBean.java:3184)
         at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:304)
         at fuego.transaction.TransactionAction.startBaseTransaction(TransactionAction.java:470)
         at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:551)
         at fuego.transaction.TransactionAction.start(TransactionAction.java:212)
         at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123)
         at fuego.server.execution.EngineExecution.executeImmediate(EngineExecution.java:66)
         at fuego.server.AbstractProcessBean.runTask(AbstractProcessBean.java:3188)
         at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at fuego.lang.JavaClass.invokeMethod(JavaClass.java:1410)
         at fuego.lang.JavaObject.invoke(JavaObject.java:227)
         at fuego.component.Message.process(Message.java:585)
         at fuego.component.ExecutionThread.processMessage(ExecutionThread.java:780)
         at fuego.component.ExecutionThread.processBatch(ExecutionThread.java:755)
         at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:142)
         at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:134)
         at fuego.fengine.FEngineProcessBean.processBatch(FEngineProcessBean.java:244)
         at fuego.component.ExecutionThread.work(ExecutionThread.java:839)
         at fuego.component.ExecutionThread.run(ExecutionThread.java:408)
    Process exited with exit code 0.

  • External Tasks

I'm trying to use the implementation type "External" for interactive activities, but I'm having some difficulties. I don't understand how to build the client, and I also don't know how the BPM engine knows whom to call.
Does anyone have a sample project they can send me?
    Rodrigo Zuchetto

    Hi Rodrigo
    You can follow the following steps
    In BPM
    1. Under External Resource, create a new resource of type 'Server Configuration'. Specify your host, port and the servlet/jsp path.
2. For the main task of the interactive activity, choose the implementation type 'External'.
3. Create a prepare and a commit method. The prepare method will be called from your servlet, and you can define arguments (as output args) that will be passed from BPM to your external Java app.
4. The commit method will be called from your Java app once you are done with the processing. You can define args (as input) here. These args will be set by the Java app when it calls the commit method.
In the Java app
1. For the servlet you defined in step 1 above, define a default handler and initialize a PAPI session there.
2. Call activityPrepare on this PAPI session and get hold of the variables passed from step 3 above.
3. Do your processing.
4. Call activityCommit on the PAPI session and pass back the parameters (a rough sketch follows below).
    HTH
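For illustration, here is a rough servlet-side sketch of those steps. It is only an outline under assumptions, not code from this thread: the directory.xml path, the password, and the argument names "arg1"/"arg2" are placeholders, and the PAPI calls follow the snippets posted earlier.
import java.io.IOException;
import java.util.Properties;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import fuego.papi.Arguments;
import fuego.papi.OperationException;
import fuego.papi.ProcessService;
import fuego.papi.ProcessServiceSession;

public class ExternalTaskServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // The engine opens this servlet with these query parameters (see the first post);
        // decode them with URLDecoder if they still contain "%2F".
        String instanceId = request.getParameter("instanceId");
        String activity = request.getParameter("activity");
        String participantId = request.getParameter("participantId");

        Properties configuration = new Properties();
        configuration.setProperty(ProcessService.DIRECTORY_ID, "default");
        configuration.setProperty(ProcessService.DIRECTORY_PROPERTIES_FILE, "/path/to/directory.xml"); // placeholder
        configuration.setProperty(ProcessService.WORKING_FOLDER, "/tmp");
        try {
            ProcessService processService = ProcessService.create(configuration);
            // Password and host are placeholders; use whatever session creation you already have
            ProcessServiceSession session = processService.createSession(participantId, "password", request.getRemoteHost());

            // Step 2: prepare - read the output argument(s) defined in the prepare method
            Arguments out = session.activityPrepare(activity, instanceId, Arguments.create());
            Object arg1 = out.getArguments().get("arg1"); // "arg1" is a placeholder name

            // Step 3: do your processing with arg1 here ...

            // Step 4: commit - pass back the input argument(s) expected by the commit method
            Arguments in = Arguments.create();
            in.putArgument("arg2", arg1); // "arg2" is a placeholder name
            session.activityCommit(activity, instanceId, in);
            session.close();
        } catch (OperationException e) {
            throw new ServletException(e);
        }
    }
}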

  • External task and Multiple-join

    In a process, the order of activities is: Begin->interactive activity A -> multiple -> interactive activity B -> Join -> interactive activity C
This process has 2 instance variables, var1 and var2.
var1's type is BPMObject.
var2's type is String.
The PBL code in the Multiple is as follows.
    ---------------------------Multiple------------------------------
    participants as Participant[]
    ts as String[]
    ts=var1.selectParticipants;
    v as Int
    v=length(ts)-1
    for i2 in 0..v
    do
         participants(i2)=Participant.find(name : ts(i2))
    end
    for each p in participants do
    copy = clone(this)
    copy.participant.next = p
    copy.var1.voteResult="O"
    end
    The main task type of interactive activity B is set to external.
The PBL code in interactive activity B is as follows.
    ------------------------interactive activity Prepare--------------------------------------------------------
    var1.voteResult=arg1
    arg2=var1.voteResult
    arg1 and arg2 in this interactive activity are fine.
Interactive activity B does not have a commit method.
    The PBL code in the Join is below.
    ---------------------------------------------------Join-----------------------------------------------
    var2=var2+"*"+copy.var1.voteResult
I also tried: this.var2=this.var2+"*"+copy.var1.voteResult
However, it only outputs *. The number of * characters equals the number of branches.
    The main task of Interactive activity C is an external task.
    This activity only has a very simple prepare method: arg1=var2
arg1 is the argument. I just want to see the value of var2. However, the value of var2 is just *.
I cannot get the value from each branch.
How can I get the correct value from each branch?

I changed the value type to Int. Then it works.
How can I get the value when the type is String?

  • Output type not accepting Comm method 5 - External send

    Hi
I'm not sure whether this should be posted on the ABAP forum or here.
I have a shipping output type,
with assigned partner functions and applicable mediums - 1, 2 and 5.
However, when trying to produce output via comm method 5 (external send), I am getting red error messages / a red traffic light.
The processing log is as follows:
    "processing log for program   xxxxxxxx routine ENTRY
    Please enter an address number
    communication type       cannot be used"
All outputs for medium types 2 (fax) and 1 (print) are OK for all partner functions involved.
Could someone please provide our ABAPer with the missing coding that is needed to accommodate the external send functionality?
Many thanks for your help on this matter.
    Tony

    Tony
    Check if you have done the following:
    1) Maintained Email address in the customer master data?
2) Maintained Email address in the user master data; this could be the generic/batch user you use for outputs.
    3) Set up communication strategy in config for external sending: SPRO/IMG/Sales and Distribution/Basic Functions/Output Control/Determine Communication Strategy.
    4) Set up settings for SAPconnect  using SCOT?
Please refer to the following OSS notes for more details and ensure that you have done all those steps:
    454893 - CHECKLISTSD: Sales document output as an e-mail
    960088 - FAQ: Sending SD messages externally
    Let me know how it goes.

  • I am trying to copy more or less 30G from my MacbookPro to an external hard drive and it is stuck in the "preparing to copy" step. But that for more than one hour. What should I do to make it faster? Thanks a lot in advance!

    I am trying to copy more or less 30G from my MacbookPro to an external hard drive and it is stuck in the "preparing to copy" step. But that for more than one hour. What should I do to make it faster? Thanks a lot in advance!

Thanks Shootist007. By blocked files I mean files that I have changed to blocked, and when I tried to move them for the first time, I had to unblock them again. I am trying to back up my pictures, my songs and other files like Word documents and Excel tables. First I set all of them as blocked, which caused the first problems when trying to move them. Then I unchecked the block option, and if I try to move them one by one, there is no problem. The issue is moving them all together, because it gets stuck in the "preparing to copy files" step. Anyway, if I cannot do it all at once, I'll do it one by one, even though that was not supposed to happen if we are talking about technology, right? Anyway, thank you again for trying to help me!

What is the difference between on new focus and do prepare output methods

    Hi,
what is the difference between the ON_NEW_FOCUS and DO_PREPARE_OUTPUT methods?

    Hi Divya,
The DO_PREPARE_OUTPUT method is triggered each time the view is prepared, after each event. Normally we redefine this method in order to default the initial values, based upon the parameter iv_first_time = 'X'. There are certain other scenarios, like adding data validations, where this method can be helpful.
For details on ON_NEW_FOCUS, refer to the thread on the on_new_focus event.
    Thanks
    Vishal

  • External task tutorial

    Hi
I have tried to look everywhere for a tutorial or example on how to use external tasks (in Oracle BPM 10gR3): what must be done in Studio and what has to be done in the code. Does anyone have any examples or tutorials on how to use external tasks? What exactly does the prepare method mean? Is it that when an (external) client calls the prepareExternalActivity() method, this specified method in the BPM engine is executed? But how exactly can this be done?
Another question: does Oracle BPM 10gR3 support asynchronous web services? Can I call an asynchronous web service from a process?
Thanks in advance.
    Best Regards Tuomas

If you look at the Variables view on the left when editing your prepare method, there should be an arguments section. An argument marked as an output argument in this method will be passed to the client when the prepare method is called. Passing a BPM Object will require a lot more code on the client side though, because those class definitions won't exist there. The simpler way is to pass out a Java object that is cataloged into BPM and have the same Java class on the client side, or to pass out an XML string that the client can then parse (a rough sketch of the XML-string approach follows).
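For illustration only, here is a minimal client-side sketch of the XML-string approach. The output-argument name "orderXml" is made up, and the session is a fuego.papi.ProcessServiceSession created as in the earlier snippets in this thread.
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import fuego.papi.Arguments;
import fuego.papi.ProcessServiceSession;

public class PrepareXmlClient {
    // Reads a String output argument from the prepare method and parses it as XML.
    // "orderXml" is a hypothetical argument name used only for this example.
    static Document readPreparedXml(ProcessServiceSession session, String activity, String instanceId)
            throws Exception {
        Arguments out = session.activityPrepare(activity, instanceId, Arguments.create());
        String xml = (String) out.getArguments().get("orderXml");
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        return builder.parse(new InputSource(new StringReader(xml)));
    }
}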

  • ADF  task-flow and transactions

    Hi All,
    I have created a web application using ADF and JDeveloper 11.1.1.4. The ADF web application has two pages 'Search' and 'Edit'. The 'Search' and 'Edit' are page fragments. I have two bounded task flows 'search-flow.xml and 'edit-flow.xml'. The search and edit functionality has been created as dynamic regions by dragging the task-flows onto the JSF page.
    The bounded search-flow.xml has page flow from search ----> edit(Parent Action component)
    The bounded edit-flow.xml has page flow as *-->edit----> search(Parent Action component)
    User comes from login page to the search page. On search page users can choose to search for records and edit individual searched records or create a new record using 'CreateInsert' button. Either way users end up on the edit page.
    On the edit page, users see a form to fill or modify. They have the choice to navigate back to search page using 'Search' button or commit changed record using 'Save' button or create new record using 'Create' button
The application is functioning 90% as expected; the only problem is the transaction management part when the user does not save each time an edit is made.
Say, for instance, on the search page the user presses the 'CreateInsert' button, which takes the user to the edit page, and then without saving the new record the user presses the 'Search' button, which takes the user back to the search page, and so on in a repeated back-and-forth cycle.
In such cases the transaction never ends. The primary key for each new record is retrieved from the DB sequence only on persisting to the database; a placeholder like a negative integer is used for the primary key temporarily until the new record is persisted.
As per best practices, I am using checkUncommittedDataBehavior, which prompts the user when navigating away from the edit page without saving, i.e. on pressing the 'Search' button on the edit page. The user can choose 'OK' or 'Cancel', which comes bundled with the ADF checkUncommittedDataBehavior property. Even after selecting OK, the new record does not seem to be rolled back (Issue 1).
E.g., if the user performs the cycle (i.e. navigating between the search and edit pages without commit/save) 'n' times, new entity records are created with primary keys -1, -2, -3, -4 and so forth. As I said earlier, a proper sequence number for the primary key is assigned only if a record is persisted to the database.
Now on the 'n+1' cycle the user fills in the record and clicks commit, and it comes up with an error about primary keys with nulls from the previous uncommitted record creations (i.e. 1, 2, 3, ... n).
I guess the transaction management is not happening correctly and not completing during these cycles (1, 2 ... n). Note: I am not using 'Task Return' components in my bounded ADF task flows, just the commit operation 'Save' button on the edit page and checkUncommittedDataBehavior.
Also, in search-flow.xml I have the following configured under the 'Behavior' tab: Transaction ---> No Controller Transaction, and 'Share data controls with calling task flow' checked.
In edit-flow.xml, these configurations are in place under the 'Behavior' tab: Transaction ---> Always Begin New Transaction, 'Share data controls with calling task flow' checked, Critical checked, and 'task flow reentry' set to reentry-outcome-dependent.
    Thanks

    Hi Frank,
    I checked the DB sequence is setup correctly. I have also modified the problem description above for better understanding.
    2. When users press OK, navigate to a commit method activity (just drag and drop the commit operation).
-- When the user presses OK, I need to rollback instead of commit, so that the user loses the unsaved work and the controller navigates to the search page. Here, the message panel with Cancel and OK is shown automatically by the <af:checkUncommittedDataBehavior/> property; I thought the framework would do the rollback for me. If I need to do the rollback manually, how do I do it? I cannot see the JSF-related source code for the <af:checkUncommittedDataBehavior/> dialog box. Please suggest what to do (a rough sketch of a manual rollback is shown after this list).
    3. If they press cancel, have the return activity rolling back to the save point taken when the user entered the bounded task flow (edit task flow)
    -- If they press cancel, user stays on the same page, so I will leave future action to the user. This bit is working fine in my app.
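For what it's worth, a commonly used ADF idiom for a programmatic rollback from a managed bean is sketched below; it assumes a 'Rollback' operation binding has been added to the current page definition, and the bean name and navigation outcome are made up for illustration.
import oracle.adf.model.BindingContext;
import oracle.binding.BindingContainer;
import oracle.binding.OperationBinding;

public class EditBackingBean {
    // Rolls back the current data control transaction and returns a navigation outcome.
    // Assumes a "Rollback" operation binding exists in this view's page definition.
    public String rollbackAndReturn() {
        BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
        OperationBinding rollback = bindings.getOperationBinding("Rollback");
        rollback.execute();
        return "toSearch"; // hypothetical outcome leading back to the search fragment
    }
}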
    Thanks

What is the difference between call transaction and session method

Hi gurus,
Can anyone tell me
what the difference is between the call transaction and session methods,
in which cases we have to use call transaction, and
in which cases we have to use the session method?
Thank you.
    regards
    kals.

    CLASSICAL BATCH INPUT (Session Method)
    CALL TRANSACTION
    BATCH INPUT METHOD:
This method is also called the 'CLASSICAL METHOD'.
    Features:
    Asynchronous processing.
    Synchronous Processing in database update.
    Transfer data for more than one transaction.
    Batch input processing log will be generated.
    During processing, no transaction is started until the previous transaction has been written to the database.
    CALL TRANSACTION METHOD :
    This is another method to transfer data from the legacy system.
    Features:
    Synchronous processing. The system performs a database commit immediately before and after the CALL TRANSACTION USING statement.
    Updating the database can be either synchronous or asynchronous. The program specifies the update type.
    Transfer data for a single transaction.
    Transfers data for a sequence of dialog screens.
    No batch input processing log is generated.
    http://www.sap-img.com/abap/difference-between-batch-input-and-call-transaction-in-bdc.htm
Of the two methods, call transaction is better compared to the session method because data transfer is faster with it.
    Differences between call transaction and session.
Session Method:
1) Data is not updated in the database table until the session is processed.
2) No sy-subrc is returned.
3) An error log is created for error records.
4) The update is always synchronous.
Call Transaction Method:
1) The database table is updated immediately.
2) sy-subrc is returned.
3) Errors need to be handled explicitly.
4) The update can be synchronous as well as asynchronous.
Additional notes:
2) You can use mode 'N' (no screen).
3) You cannot handle multiple transactions in call transaction.
4) You can handle multiple transactions in a session using the BDC_INSERT function module.
5) When you go to SM35 you can see the error records.
Which is best?
That depends on your requirement; both of them have their advantages.
Depending on the situation, you can choose either one.
Differences between batch input (session method) and call transaction in BDC:
Session method:
1) asynchronous processing
2) can transfer large amounts of data
3) processing is slower
4) an error log is created
5) data is not updated until the session is processed
Call transaction:
1) synchronous processing
2) can transfer small amounts of data
3) processing is faster
4) errors need to be handled explicitly
5) data is updated immediately
For the session method, these are the function modules to be used:
BDC_OPEN_GROUP
BDC_INSERT
BDC_CLOSE_GROUP
For call transaction, this is the syntax:
CALL TRANSACTION TCODE USING BDCDATA
MODE A or E or N
UPDATE A or S
MESSAGES INTO MESSTAB.
Take a scenario where we need to post documents in FB01 and the input file has, say, 2000 records (2000 documents, not line items in FB01, but 2000 records).
In the BDC call transaction method
we call the transaction FB01 2000 times (once for each record posting), and if the processing fails on record no. 3 it can be captured and we can continue with record 4.
Eg: Loop at itab.
call transaction FB01
capture errors
endloop.
In the session method
we do not explicitly call the transaction 2000 times; instead, all the records are appended into a session and this session is stored. The processing of the session is done whenever the user wants it to be done. Hence the errors cannot be captured in the program itself.
    Check these link:
    http://www.sap-img.com/abap/difference-between-batch-input-and-call-transaction-in-bdc.htm
    http://www.sap-img.com/abap/question-about-bdc-program.htm
    http://www.itcserver.com/blog/2006/06/30/batch-input-vs-call-transaction/
    http://www.planetsap.com/bdc_main_page.htm
The batch input session method is asynchronous, as others have said here. But the advantage of this is that you have all the error messages and the data for each transaction held persistently. You don't have to code anything for processing them or writing the logs.
But at the same time, the same feature can be a disadvantage if you need to react to an error, or if there are too many errors to correct manually in a session. Since the sessions are created in the program and their execution is done separately, you lose the traceability of such transactions.
With a call transaction, what was a disadvantage above becomes an advantage. Call transaction immediately gives you messages back and you can react to them in your program. But the disadvantage is that, if you have several hundred transactions to run, running them from within the program can be a resource-crunching affair. It will hamper system performance and you cannot really distribute the load. Of course, there are mechanisms with which you can overcome this, but you will have to code for them. Also, storing the messages and storing the data of the failed transactions will have to be handled by you in the program. Whereas in a batch input session, your program's job is just to create the session; after that, everything is the standard SAP system's responsibility.
Ideally, you should do a call transaction if resources are not a problem and, if it fails, put the failed transaction into a session.
You can decide based on the data volume that your BDC is processing. If the data volume is high, go for a session; otherwise call transaction will do. Call transaction updates are instantaneous, whereas a session needs to be processed explicitly after creation.
Session Method
1) The session method supports both small and large amounts of data.
2) Data processing is asynchronous and the database update is synchronous.
3) It can process multiple applications while performing validations.
4) In the session method, data will be updated in the database only after the session is processed.
5) The system provides a default log file for handling error records.
6) It supports both foreground and background processing.
In BDC we use these function modules:
bdc_open_group " for creating a session
bdc_insert " for adding a transaction and the bdcdata table for updating the database
bdc_close_group " for closing the session
Call Transaction
1) Call transaction is meant mainly for small amounts of data.
2) It supports only one application while performing validations.
3) There is no default log file; we can explicitly provide logic for creating a log file for handling error records. We can create the log file by using the structure BDCMSGCOLL.
4) It doesn't support background processing.
5) Data processing is synchronous and the database update is synchronous by default; this method also supports updating the database asynchronously.
Syntax:
CALL TRANSACTION <transaction-name> USING BDCDATA
MODE <A/N/E>
UPDATE <L/A/S>
MESSAGES INTO BDCMSGCOLL.
    BDC:
Batch Data Communication (BDC) is the process of transferring data from one SAP system to another SAP system, or from a non-SAP system to an SAP system.
Features:
BDC is an automatic procedure.
This method is used to transfer large amounts of data that are available in electronic form.
    BDC can be used primarily when installing the SAP system and when transferring data from a legacy system (external system).
    BDC uses normal transaction codes to transfer data.
    Types of BDC :
    CLASSICAL BATCH INPUT (Session Method)
    CALL TRANSACTION
    For BDC:
    http://myweb.dal.ca/hchinni/sap/bdc_home.htm
    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/home/bdc&
    http://www.sap-img.com/abap/learning-bdc-programming.htm
    http://www.sapdevelopment.co.uk/bdc/bdchome.htm
    http://www.sap-img.com/abap/difference-between-batch-input-and-call-transaction-in-bdc.htm
    http://help.sap.com/saphelp_47x200/helpdata/en/69/c250684ba111d189750000e8322d00/frameset.htm
    http://www.sapbrain.com/TUTORIALS/TECHNICAL/BDC_tutorial.html
Call transaction or session method?
    Check the following links:
    http://www.sap-img.com/bdc.htm
See the example code below:
it calls three function modules: BDC_OPEN_GROUP, BDC_INSERT and BDC_CLOSE_GROUP.
Once you execute the program, it creates a session that you can see in transaction SM35.
* Report      : ZMPPC011
* Type        : Data upload
* Author      : Chetan Shah
* Date        : 05/05/2005
* Transport   : DV3K919557
* Transaction : ??
* Description : This ABAP/4 program creates new Production Versions
*               (C223). It accepts tab-delimited spreadsheet input and
*               creates BDC sessions.
* Modification Log
* Date         Programmer    Request #    Description
* 06/10/2005   Chetan Shah   DV3K919557   Initial coding
    report zmppc011 no standard page heading line-size 120 line-count 55
    message-id zz.
* pool of form routines
    include zmppn001.
* Define BDC Table Structure
    data: begin of itab_bdc_tab occurs 0.
    include structure bdcdata.
    data: end of itab_bdc_tab.
    Input record layout of Leagcy File
    data: begin of itab_xcel occurs 0,
    matnr(18) type c,
    werks(4) type c,
    alnag(2) type c,
    verid(4) type c,
    text1(40) type c,
    bstmi like mkal-bstmi,
    bstma like mkal-bstma,
    adatu(10) type c,
    bdatu(10) type c,
    stlal(2) type c,
    stlan(1) type c,
    serkz(1) type c,
    mdv01(8) type c,
    elpro(4) type c,
    alort(4) type c,
    end of itab_xcel.
    data: begin of lt_pp04_cache occurs 0,
    matnr like itab_xcel-matnr,
    werks like itab_xcel-werks,
    alnag like itab_xcel-alnag,
    plnnr like mapl-plnnr,
    arbpl like crhd-arbpl,
    ktext like crtx-ktext,
    end of lt_pp04_cache.
    data: v_ssnnr(4) type n,
    v_lines_in_xcel like sy-tabix,
    v_ssnname like apqi-groupid,
    v_trans_in_ssn type i,
    wa_xcel LIKE itab_xcel,
    l_tabix like sy-tabix,
    v_matnr like rc27m-matnr,
    v_plnnr like mapl-plnnr,
    v_plnal like mapl-plnal,
    v_tcode like sy-tcode value 'C223',
    v_plnty like plas-plnty value 'R',
    v_objty like crhd-objty value 'A',
    v_plpo_steus like plpo-steus value 'PP04',
    v_verwe like crhd-verwe value '0007'.
    * Parameters
    selection-screen: skip 3.
    selection-screen: begin of block 1 with frame.
    parameters: p_name like rlgrap-filename
    default 'C:\My Documents\InputFile.txt'
    obligatory,
    * bdc session name prefix
    p_bdcpfx(6) default 'ZPVCRT'
    obligatory,
    * number of transactions per BDC session
    p_trnssn type i
    default 2000 obligatory,
    * retain the BDC session after successful execution
    p_keep like apqi-qerase
    default 'X',
    * user who will be executing the BDC session
    p_uname like apqi-userid
    default sy-uname
    obligatory.
    selection-screen: end of block 1.
    * possible entry list (F4 dropdown) for input file name
    at selection-screen on value-request for p_name.
    *-SELECT FILE FROM USERS LOCAL PC
    call function 'WS_FILENAME_GET'
    exporting
    DEF_FILENAME = ' '
    def_path = 'C:\Temp\'
    mask = ',.,..'
    mode = 'O'
    title = 'Select File '(007)
    importing
    filename = p_name
    * RC =
    exceptions
    inv_winsys = 1
    no_batch = 2
    selection_cancel = 3
    selection_error = 4
    others = 5.
    if sy-subrc <> 0.
    MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
    WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    endif.
    * begin the show
    start-of-selection.
    * read data from input file
    perform transfer_xcel_to_itab.
    loop at itab_xcel.
    * hang on to xcel line num
    l_tabix = sy-tabix.
    * each line in the xcel file marks beginning of a new prod. version defn
    * if num-of-trans-in-session = 0, create new BDC session
    if v_trans_in_ssn is initial.
    perform bdc_session_open.
    endif.
    * begin new bdc script for rtg create trans
    * fill in bdc-data for prod. version maintenance screens
    perform bdc_build_script.
    * insert the bdc script as a BDC transaction
    perform bdc_submit_transaction.
    * keep track of how many BDC transactions were inserted in the BDC
    * session
    add 1 to v_trans_in_ssn.
    * if the user-specified num of trans in BDC session is reached OR
    * if end of input file is reached, close the BDC session
    if v_trans_in_ssn = p_trnssn or
    l_tabix = v_lines_in_xcel.
    perform bdc_session_close.
    clear v_trans_in_ssn.
    endif.
    endloop.
    top-of-page.
    call function 'Z_HEADER'
    * EXPORTING
    *   FLEX_TEXT1 =
    *   FLEX_TEXT2 =
    *   FLEX_TEXT3 =
    .
    * FORM TRANSFER_XCEL_TO_ITAB *
    * Transfer Xcel Spreadsheet to SAP Internal Table *
    form transfer_xcel_to_itab.
    * Read the tab-delimited file into itab
    call function 'WS_UPLOAD'
    exporting
    filename = p_name
    filetype = 'DAT'
    * IMPORTING
    *   filelength = flength
    tables
    data_tab = itab_xcel
    exceptions
    conversion_error = 1
    file_open_error = 2
    file_read_error = 3
    invalid_table_width = 4
    invalid_type = 5
    no_batch = 6
    unknown_error = 7
    others = 8.
    if sy-subrc = 0.
    * sort the data
    sort itab_xcel by matnr werks.
    clear v_lines_in_xcel.
    * if no data in the file - error out
    describe table itab_xcel lines v_lines_in_xcel.
    if v_lines_in_xcel is initial.
    write: / 'No data in input file'.
    stop.
    endif.
    else.
    * if file upload failed - error out
    write: / 'Error reading input file'.
    stop.
    endif.
    endform.
    * FORM BDC_SESSION_OPEN *
    * Open BDC Session *
    form bdc_session_open.
    * create bdc session name = prefix-from-selectn-screen + nnnn
    add 1 to v_ssnnr.
    concatenate p_bdcpfx v_ssnnr into v_ssnname.
    * open new bdc session
    call function 'BDC_OPEN_GROUP'
    exporting
    client = sy-mandt
    group = v_ssnname
    keep = p_keep
    user = p_uname
    exceptions
    client_invalid = 1
    destination_invalid = 2
    group_invalid = 3
    group_is_locked = 4
    holddate_invalid = 5
    internal_error = 6
    queue_error = 7
    running = 8
    system_lock_error = 9
    user_invalid = 10
    others = 11.
    endform.
    * FORM BDC_BUILD_SCRIPT *
    * Build BDC *
    form bdc_build_script.
    data: l_arbpl like crhd-arbpl,
    l_text1 like mkal-text1,
    l_mdv01 like mkal-mdv01,
    l_mapl like mapl.
    * clear bdc-data itab - begin of new bdc transaction
    clear itab_bdc_tab.
    refresh itab_bdc_tab.
    * read material cross reference tables to determine sap part#
    clear : v_matnr, v_plnnr, v_plnal.
    perform read_matnr_cross_ref using itab_xcel-matnr
    itab_xcel-werks
    changing v_matnr.
    * determine the version description to use
    if itab_xcel-text1 is initial.
    l_text1 = itab_xcel-verid.
    else.
    l_text1 = itab_xcel-text1.
    endif.
    * determine the routing group# and group ctr# to use
    perform read_routing .
    * determine the production line to use
    if itab_xcel-mdv01 is initial.
    * if not provided in the file then:
    * prod line = work ctr on the last PP04 op of the rtg determined above
    perform read_wc_on_last_pp04 using v_plnnr v_plnal
    changing l_mdv01.
    * NOTE: when executing the above form\routine, if v_plnnr is initial
    * or v_plnal is initial, THEN l_mdv01 will automatically be
    * returned blank (ie initial)
    else.
    l_mdv01 = itab_xcel-mdv01.
    endif.
    * build bdc script
    perform bdc_build_script_record
    * fill in initial screen
    using: 'X' 'SAPLCMFV' '1000',
    ' ' 'BDC_OKCODE' '=ENTE',
    ' ' 'MKAL-WERKS' itab_xcel-werks,
    ' ' 'MKAL-MATNR' v_matnr,
    ' ' 'MKAL_ADMIN-DISPO' space,
    ' ' 'MKAL-PLNNR' space,
    ' ' 'MKAL_ADMIN-STTAG' space,
    ' ' 'MKAL-PLNNG' space,
    ' ' 'MKAL-MDV01' space,
    ' ' 'MKAL-PLNNM' space,
    * click create button on initial screen and go to detail screen
    'X' 'SAPLCMFV' '1000',
    ' ' 'BDC_OKCODE' '=CREA',
    * fill in the detail screen and go back to initial screen
    'X' 'SAPLCMFV' '2000',
    ' ' 'BDC_OKCODE' '=CLOS',
    ' ' 'MKAL_EXPAND-MATNR' v_matnr,
    ' ' 'MKAL_EXPAND-VERID' itab_xcel-verid,
    ' ' 'MKAL_EXPAND-TEXT1' l_text1,
    ' ' 'MKAL_EXPAND-BSTMI' itab_xcel-bstmi,
    ' ' 'MKAL_EXPAND-BSTMA' itab_xcel-bstma,
    ' ' 'MKAL_EXPAND-ADATU' itab_xcel-adatu,
    ' ' 'MKAL_EXPAND-BDATU' itab_xcel-bdatu,
    ' ' 'MKAL_EXPAND-PLTYG' v_plnty,
    ' ' 'MKAL_EXPAND-PLNNG' v_plnnr,
    ' ' 'MKAL_EXPAND-ALNAG' v_plnal,
    ' ' 'MKAL_EXPAND-STLAL' itab_xcel-stlal,
    ' ' 'MKAL_EXPAND-STLAN' itab_xcel-stlan,
    ' ' 'MKAL_EXPAND-SERKZ' itab_xcel-serkz,
    ' ' 'MKAL_EXPAND-MDV01' l_mdv01,
    ' ' 'MKAL_EXPAND-ELPRO' itab_xcel-elpro,
    ' ' 'MKAL_EXPAND-ALORT' itab_xcel-alort,
    * save the production version from initial screen
    'X' 'SAPLCMFV' '1000',
    ' ' 'BDC_OKCODE' '=SAVE'.
    endform.
    * FORM BDC_SUBMIT_TRANSACTION *
    * Submit BDC Session *
    form bdc_submit_transaction.
    * Load the BDC script as a transaction in the BDC session
    call function 'BDC_INSERT'
    exporting
    tcode = v_tcode
    tables
    dynprotab = itab_bdc_tab
    exceptions
    internal_error = 01
    not_open = 02
    queue_error = 03
    tcode_invalid = 04.
    endform.
    * FORM BDC_BUILD_SCRIPT_RECORD *
    form bdc_build_script_record using dynbegin name value.
    clear itab_bdc_tab.
    if dynbegin = 'X'.
    move: name to itab_bdc_tab-program,
    value to itab_bdc_tab-dynpro,
    'X' to itab_bdc_tab-dynbegin.
    else.
    move: name to itab_bdc_tab-fnam,
    value to itab_bdc_tab-fval.
    shift itab_bdc_tab-fval left deleting leading space.
    endif.
    append itab_bdc_tab.
    endform.
    * FORM BDC_SESSION_CLOSE *
    * Close BDC Session *
    form bdc_session_close.
    * close the session
    call function 'BDC_CLOSE_GROUP'
    exceptions
    not_open = 1
    queue_error = 2
    others = 3.
    skip 2.
    if sy-subrc ne 0.
    write: / 'Error Closing BDC Session ' , 'RETURN CODE: ', sy-subrc.
    else.
    write : / 'Session created:', v_ssnname,
    50 '# of transactions:', v_trans_in_ssn.
    endif.
    endform.
    *& Form read_routing_cache
    *FORM read_routing_cache USING pi_matnr
    pi_werks
    pi_alnag
    pi_verid
    pi_mdv01.
    DATA: BEGIN OF lt_plpo OCCURS 0,
    vornr LIKE plpo-vornr,
    objty LIKE crhd-objty,
    objid LIKE crhd-objid,
    arbpl LIKE crhd-arbpl,
    END OF lt_plpo,
    l_mapl_plnnr LIKE mapl-plnnr.
    determine the routing group#
    CLEAR lt_pp04_cache.
    chk if its in the cache first, if not then get it from MAPL table
    and put it in the cache
    READ TABLE lt_pp04_cache WITH KEY matnr = pi_matnr
    werks = pi_werks
    alnag = pi_alnag.
    IF sy-subrc = 0.
    do nothing - lt_pp04_cache header line has rtg#
    ELSE.
    get the routing group # from MAPL
    SELECT plnnr INTO l_mapl_plnnr
    FROM mapl UP TO 1 ROWS
    WHERE matnr = pi_matnr AND
    werks = pi_werks AND
    plnty = 'R' AND
    plnal = pi_alnag AND
    loekz = space.
    ENDSELECT.
    put it in the cache internal table
    IF NOT l_mapl_plnnr IS INITIAL.
    lt_pp04_cache-matnr = pi_matnr.
    lt_pp04_cache-werks = pi_werks.
    lt_pp04_cache-alnag = pi_alnag.
    lt_pp04_cache-plnnr = l_mapl_plnnr.
    APPEND lt_pp04_cache.
    ENDIF.
    ENDIF.
    if the rtg# was determined AND
    the work center was not determined yet AND
    work center was really needed for this line in the input file
    then
    read the work center from last PP04 operation on the routing
    update the cache accordingly
    IF NOT lt_pp04_cache-plnnr IS INITIAL AND
    lt_pp04_cache-arbpl IS INITIAL AND
    ( pi_verid IS INITIAL OR
    pi_mdv01 IS INITIAL ).
    read the last PP04 operation
    CLEAR lt_plpo.
    REFRESH lt_plpo.
    SELECT vornr eobjty eobjid e~arbpl
    INTO CORRESPONDING FIELDS OF TABLE lt_plpo
    FROM plas AS b
    INNER JOIN plpo AS c
    ON bplnty = cplnty AND
    bplnnr = cplnnr AND
    bzaehl = czaehl
    INNER JOIN crhd AS e
    ON carbid = eobjid
    WHERE b~plnty = v_plnty AND
    b~plnnr = lt_pp04_cache-plnnr AND
    b~plnal = lt_pp04_cache-alnag AND
    c~loekz = space AND
    c~steus = v_plpo_steus AND
    e~objty = v_objty AND
    e~werks = lt_pp04_cache-werks AND
    e~verwe = v_verwe.
    SORT lt_plpo BY vornr DESCENDING.
    READ TABLE lt_plpo INDEX 1.
    IF NOT lt_plpo-arbpl IS INITIAL.
    lt_pp04_cache-arbpl = lt_plpo-arbpl.
    read work center description
    SELECT SINGLE ktext INTO lt_pp04_cache-ktext
    FROM crtx WHERE objty = lt_plpo-objty AND
    objid = lt_plpo-objid AND
    spras = sy-langu.
    the following read will get the index of the correct record to be
    updated in the cache
    READ TABLE lt_pp04_cache
    WITH KEY matnr = pi_matnr
    werks = pi_werks
    alnag = pi_alnag.
    MODIFY lt_pp04_cache
    INDEX sy-tabix
    TRANSPORTING arbpl ktext.
    ENDIF.
    ENDIF.
    *ENDFORM. " read_last_pp04_operation_cache
    *& Form read_routing
    form read_routing.
    data: begin of lt_mapl occurs 0,
    plnnr like mapl-plnnr,
    plnal like mapl-plnal,
    end of lt_mapl,
    l_arbpl like crhd-arbpl.
    * get all the rtg# and grp ctr# from MAPL
    select plnnr plnal
    into corresponding fields of table lt_mapl
    from mapl
    where matnr = v_matnr and
    werks = itab_xcel-werks and
    plnty = v_plnty and "Rate Routing
    loekz = space. "with del flag = OFF
    sort lt_mapl by plnal.
    if not itab_xcel-verid is initial.
    * if the verid = 0001 then use the 1st good rtg-grp# and grp-ctr#
    if itab_xcel-verid = '0001'.
    read table lt_mapl index 1.
    v_plnnr = lt_mapl-plnnr.
    v_plnal = lt_mapl-plnal.
    else.
    * if the verid <> 0001 then use the rtg-grp# and grp-ctr# of the routing
    * whose work center on the last PP04 operation matches the given verid
    loop at lt_mapl.
    clear l_arbpl.
    * get the work center from the last PP04 operation
    perform read_wc_on_last_pp04 using lt_mapl-plnnr
    lt_mapl-plnal
    changing l_arbpl.
    if itab_xcel-verid = l_arbpl.
    v_plnnr = lt_mapl-plnnr.
    v_plnal = lt_mapl-plnal.
    exit.
    endif.
    endloop.
    endif.
    else.
    * do nothing
    endif.
    * For version IDs other than '0000' or 'ZWIP':
    if itab_xcel-verid NE '0000' and
    itab_xcel-verid NE 'ZWIP'.
    * if routing group# or group counter was not determined, make the
    * valid-to date 99/99/9999 so that the BDC, on execution, errors out.
    if v_plnnr is initial or
    v_plnal is initial.
    itab_xcel-bdatu = '99/99/9999'.
    endif.
    endif.
    endform. " read_last_pp04_operation_cache
    *& Form read_wc_on_last_pp04
    form read_wc_on_last_pp04 using pi_plnnr
    pi_plnal
    changing pe_arbpl.
    data: begin of lt_plpo occurs 0,
    vornr like plpo-vornr,
    objty like crhd-objty,
    objid like crhd-objid,
    arbpl like crhd-arbpl,
    end of lt_plpo.
    * get all the PP04 operations for the given rtg# & grp-ctr#
    select vornr e~objty e~objid e~arbpl
    into corresponding fields of table lt_plpo
    from plas as b
    inner join plpo as c
    on b~plnty = c~plnty and
    b~plnnr = c~plnnr and
    b~zaehl = c~zaehl
    inner join crhd as e
    on c~arbid = e~objid
    where b~plnty = v_plnty and "Rate Routing
    b~plnnr = pi_plnnr and
    b~plnal = pi_plnal and
    c~loekz = space and "Oper Del Flag = OFF
    c~steus = v_plpo_steus and "PP04
    e~objty = v_objty. "WC Obj Type = 'A'
    * read the last operation
    sort lt_plpo by vornr descending.
    read table lt_plpo index 1.
    pe_arbpl = lt_plpo-arbpl.
    endform. " read_wc_on_last_pp04
    Go to LSMW -> select the Direct Input method in the 1st step. These are the standard programs for data transfer.
    Otherwise go to SPRO -> SAP Reference IMG -> under this you'll find the standard data transfer programs, module-wise.
    Regards,
    Sunil Kumar Mutyala

  • ADF Form Submission and Commit on Same button

    Hi All,
    I have created a JSFF which contains 2 forms. My requirement is to create a button which first submits the form and then performs the commit in the same button click.
    I am able to do it in 2 steps, as I have a Submit button and the Commit operation available.
    But my query is that Submit is not exposed as an operation, so how can I write a method in my bean to make sure both tasks happen on the same button?
    Regards
    Harsh

    Hey John,
    The Commit button on my page does not get enabled unless I submit the form.
    Sorry if I am sounding stupid; I am kind of new to ADF.
    Regards
    Harsh
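
    A common way to do this in one click is to expose the Commit operation as an action binding in the page definition and invoke it from a managed-bean actionListener; the button click itself already submits the form. Below is a minimal sketch, not a definitive implementation - the bean, method and binding names are only assumptions, and it presumes the page definition contains the standard Commit action binding.

    // Managed-bean sketch (names are placeholders); the commandButton's
    // actionListener would point at #{submitCommitBean.submitAndCommit}.
    import javax.faces.event.ActionEvent;
    import oracle.adf.model.BindingContext;
    import oracle.binding.BindingContainer;
    import oracle.binding.OperationBinding;

    public class SubmitAndCommitBean {

        public void submitAndCommit(ActionEvent event) {
            // By the time this listener runs, the button click has already
            // submitted the form values to the model.
            BindingContainer bindings =
                BindingContext.getCurrent().getCurrentBindingsEntry();
            OperationBinding commit = bindings.getOperationBinding("Commit");
            commit.execute();
            if (!commit.getErrors().isEmpty()) {
                // surface or log the commit errors as appropriate
                System.err.println("Commit failed: " + commit.getErrors());
            }
        }
    }

    The same pattern works with an action method returning a navigation outcome instead of an actionListener, if navigation is also needed after the commit.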

  • HT1364 I have moved my library to an external hard drive and changed the location of the iTunes media folder in Preferences, but every time I close and re-open iTunes, I have to do it all over again. How can I make the iTunes media folder change permanent?

    I have moved my library to an external hard drive and changed the location of the iTunes media folder in Preferences, but every time I close and re-open iTunes, I have to do it all over again.  How can I make the iTunes media folder change permanent?  I have an older machine with Windows XP.

    I don't believe mounting the hard drive should be necessary, unless you have several external drives and want your computer to recognise them as folders, rather than drives. I've never had to mount a hard drive, ever. If you don't know how to do it, then it shouldn't be necessary now.
    Try this:
    Prepare iTunes so that it can see the external drive.
    Make a note of which drive-letter the external drive has been allocated. (Look in Windows Explorer.)
    Look at the file location for a song. Make sure it plays (and therefore that iTunes has found it). Highlight it and select File/Get Info/Summary>Where: and make a note of the drive letter for that song.
    Close and shut down the computer.
    The next time you turn the computer on again, connect the external drive
    Before you start iTunes - check the external drive in Windows Explorer. Is it ready, does it have the same drive-letter that it had last time? Can you go into the drive and see the files on it?
    Once you can, start iTunes. (If the drive letter has changed, you need to work out why before going any further.)
    If iTunes fails to find your external drive, you need to check where iTunes is looking for your Library.
    Select the same song you checked before (presumably iTunes can no longer find it). Follow the procedure for locating it. You should be able to see where iTunes thinks the file is. It's the drive that counts. Which drive letter is iTunes looking at? Is it the same one that it was previously (which should also be the same one that the drive has now).
    What happens, which step do you have problems with?
    Message was edited by: the fiend

  • Need to Return immediately and commit the App Module on a different thread

    I have an action that I want to return fast (immediately) but the server processing takes longer than acceptable. The results of the operation don't matter to the page submitting it and I want it to be able to navigate away even if the operation is not complete. I want to either be able to send a non-blocking server event from the browser or on the server side start a new thread that performs the operation allowing the original thread to return immediately. The new thread would need access to an Application Module in order to commit data. How would I go about accomplishing this?
    Some thoughts
    I've tried creating a ConcurrentLinkedQueue and putting the DataControl on the queue; then in the other thread I pull it off the queue, process and commit the data. This works unless the page is navigated away from. Then calling dc.getApplicationModule() returns null.
    I thought about using createRootApplicationModule in the new thread (since the new thread has no context) but don't know how that would work.
    This is the code in the run method of the new thread. In this example, I'm adding data to the app module in the original thread and committing the data in a new thread.
    (like I said, it works most of the time.)
    // Worker-thread code: take the DataControl that was placed on the queue
    // by the request thread and commit its transaction.
    Object[] req = (Object[]) que.poll();
    DCDataControl dc = (DCDataControl) req[0];
    try {
        ApplicationModule am = dc.getApplicationModule();
        if (am != null) {
            am.getTransaction().commit();
        } else {
            System.out.println("AM: null, unable to commit");
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (dc != null) { dc.resetState(); } // release app module
    }
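
    One way around the null ApplicationModule (a sketch only, not necessarily the eventual solution) is to stop handing the page's DataControl to the worker and instead pass plain values, letting the worker create and release its own root application module via oracle.jbo.client.Configuration. The AM definition and configuration names below are placeholders:

    import oracle.jbo.ApplicationModule;
    import oracle.jbo.client.Configuration;

    public class CommitWorker implements Runnable {
        public void run() {
            // "model.PerfLogModule" / "PerfLogModuleLocal" are assumed names.
            ApplicationModule am =
                Configuration.createRootApplicationModule("model.PerfLogModule",
                                                          "PerfLogModuleLocal");
            try {
                // ... create/populate rows through the AM's view objects
                //     from the plain values taken off the queue ...
                am.getTransaction().commit();
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                // true = discard the instance rather than returning it to the pool
                Configuration.releaseRootApplicationModule(am, true);
            }
        }
    }

    This keeps the background commit independent of the user's session and of any DataControl that the framework may release once the user navigates away.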

    Thanks for the replies. I am aware of the inherent risks of running a separate thread within a managed container.
    The use case is a performance logging operation. We have a internal web app used by a network of franchises with over 1000 users. We log response time and performances statistics to the database. When the user clicks to navigate or commit data, the response time that the user experiences is logged after the page has fully rendered either through a PPR or a full submit. This is done by submitting ADFCustomEvent from javascript on the page after rendering is complete.. The event sends up the time difference from when the user first clicked to when the page was fully rendered. This information is then merged with logged events stored on the users Session that shows the name and response time of every query that was executed during the previous request. Depending on the page this could be up to half dozen to a dozen or more queries. The logging operation as experienced by the browser is generally fast (<200ms) but sometimes can be as long as a second or more when the database gets busy. A half second is too long as makes the app appear sluggish if the user can't type or click immediately after the page has finished rendering. The logged data is aggregated so we know exactly how much of the page load was due to a slow browser/network, how much was database time, webservice call time, etc... If it's due to a slow database we can drill down and see which query is the culprit. These performance metrics are critical to operations and are charted throughout the day so we know exactly what our users are experiencing. All of our users use a custom firefox client that we control. Using this logging framework we were able to determine that upgrading to a Firefox 4.0 based client cut browser render time by more than half a second on average. We can also tell what type of hardware the user is running so can place the blame for poor performance where appropriate. We have determined that pages render considerably faster on Windows 7 than on Windows 98 with the same hardware. We are moving the logging tables off of our exadata database to a separate box to remove that load from the application database. Since we expect the other database not to perform as well we don't want it to affect the user experience, hence the need to log asynchronously. I would like to put the data on a queue and have a background daemon process read from the queue and commit to the database. I would like the daemon thread to be able to use BC components. I would prefer not to resort to using a web service because of the inherent overhead. The logging operation is not a long operation but is of high frequency so should be as streamlined as possible. The load is spread over 6 servers with 4 JVM's each (24 weblogic instances). I know it's possible to use BC components from a plain Servlet (which runs on it's own thread) so what I want is to have something like a servlet thread that loops forever processing my logging queue.
    One other method I am investigating is using my own non-blocking ajax call that calls a servlet to perform the logging. I will need to pull out the timestamp contained within a client-side ADF component, along with the page's ctrl-state variable that is included with every ADF request, as it uses this as the key to get to the data on the session. ADF really needs a non-blocking ADFCustomEvent for this type of request (send and don't care about the response).
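    For the servlet route, the receiving end can stay very small; the sketch below (servlet name, URL mapping and parameter names are all assumptions) just drops the posted values on an in-memory queue so the request returns immediately, leaving the actual insert to a background worker like the one sketched earlier:

    import java.io.IOException;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical fire-and-forget logging endpoint (map it to e.g. /perflog).
    public class PerfLogServlet extends HttpServlet {

        // Simple in-memory hand-off; a background worker drains this queue
        // and commits the rows to the logging database.
        private static final BlockingQueue<String[]> QUEUE =
            new LinkedBlockingQueue<String[]>();

        public static BlockingQueue<String[]> queue() {
            return QUEUE;
        }

        @Override
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            QUEUE.offer(new String[] {
                request.getParameter("typeId"),
                request.getParameter("subTypeId"),
                request.getParameter("responseTime1"),
                request.getParameter("responseTime2")
            });
            // Nothing for the browser to wait on.
            response.setStatus(HttpServletResponse.SC_NO_CONTENT);
        }
    }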
    The client component with the server listener looks like this
    <af:outputText value="#{pageFlowScope.perfClientTS}" visible="false"
    id="perfClientTSField" clientComponent="true">
    <af:serverListener type="logPerfData" method="#{perfLog.logPerfDataAction}"/>
    </af:outputText>
    The script that queues the ajax call after the page loads looks like this
    AdfCustomEvent.queue(perfClientTSField, "logPerfData",
        { typeId : typeId,
          subTypeId : subTypeId,
          responseTime1 : new String(responseTime1),
          responseTime2 : new String(responseTime2),
          openedVia : via },
        true);
    I also tried calling the noResponseExpected() method on the event before queuing it but it still blocked the UI and caused an additional side effect in that the client sent two ajax requests instead of one. It somehow thought something on the client side needed to be synced with the server.
    email me and I can send a doc with more details about how our performance logging framework works.
    Edited by: Don Kleppinger on Mar 14, 2012 2:52 PM
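
    For completeness, the serverListener shown above would be backed by a bean method along these lines (a sketch only; the bean name matches the #{perfLog...} expression and the parameter keys mirror the ones queued from JavaScript):

    import oracle.adf.view.rich.render.ClientEvent;

    public class PerfLogBean {
        // Wired via <af:serverListener type="logPerfData"
        //            method="#{perfLog.logPerfDataAction}"/>
        public void logPerfDataAction(ClientEvent event) {
            Object typeId        = event.getParameters().get("typeId");
            Object responseTime1 = event.getParameters().get("responseTime1");
            // ... merge with the timings held on the session and hand off to the
            //     asynchronous logging queue sketched earlier ...
            System.out.println("perf event: typeId=" + typeId +
                               ", responseTime1=" + responseTime1);
        }
    }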

  • How to move huge HD video files between external hard drives and defrag ext drive?

    I have huge high definition video files on a 2TB external hard drive (and its clone).  The external hard drive is maxed out.  I would like to move many of the video files to a new 3TB external hard drive (G-drive, and a clone) and leave a sub-group of video files (1+ TB) on the original external hard drive (and its clone).  
    I am copying files from original external drive ("ext drive A") to new external drive ("ext drive B") via Carbon Copy Cloner (selecting iMovie event by event that I want to transfer). Just a note: I do not know how to partition or make bootable drives, I see suggestions with these steps in them.
    My questions:
    1.)  I assume this transfer of files will create extreme fragmentation on drive A.  Should I reformat/re-initialize ext drive A after moving the files I want?  If so, how best to do this?  Do I use "Erase" within Disk Utilities?  Do I need to do anything else before transfering files back onto ext drive A from its clone?
    2.) Do I also need to defrag if I reformat ext drive A? Do I defrag instead of or in addition to reformating?  If so, how to do this? I've read on these forums so many warnings and heard too many stories of this going awry.  Which 3rd party software to use? 
    Thank you in advance for any suggestions, tips, advice.  This whole process makes me SO nervous.

    Here is a very good writeup on de-fragging in the OS environment that I borrowed
    From Klaus1:
    Defragmentation in OS X:
    http://support.apple.com/kb/HT1375  which states:
    You probably won't need to optimize at all if you use Mac OS X. Here's why:
    Hard disk capacity is generally much greater now than a few years ago. With more free space available, the file system doesn't need to fill up every "nook and cranny." Mac OS Extended formatting (HFS Plus) avoids reusing space from deleted files as much as possible, to avoid prematurely filling small areas of recently-freed space.
    Mac OS X 10.2 and later includes delayed allocation for Mac OS X Extended-formatted volumes. This allows a number of small allocations to be combined into a single large allocation in one area of the disk.
    Fragmentation was often caused by continually appending data to existing files, especially with resource forks. With faster hard drives and better caching, as well as the new application packaging format, many applications simply rewrite the entire file each time. Mac OS X 10.3 onwards can also automatically defragment such slow-growing files. This process is sometimes known as "Hot-File-Adaptive-Clustering."
    Aggressive read-ahead and write-behind caching means that minor fragmentation has less effect on perceived system performance.
    Whilst 'defragging' OS X is rarely necessary, Rod Hagen has produced this excellent analysis of the situation which is worth reading:
    Most users, as long as they leave plenty of free space available , and don't work regularly in situations where very large files are written and rewritten, are unlikely to notice the effects of fragmentation on either their files or on the drives free space much.
    As the drive fills the situations becomes progressively more significant, however.
    Some people will tell you that "OSX defrags your files anyway". This is only partly true. It defrags files that are less than 20 MB in size. It doesn't defrag larger files and it doesn't defrag the free space on the drive. In fact the method it uses to defrag the smaller files actually increases the extent of free space fragmentation. Eventually, in fact, once the largest free space fragments are down to less than 20 MB (not uncommon on a drive that has , say only 10% free space left) it begins to give up trying to defrag altogether. Despite this, the system copes very well without defragging as long as you have plenty of room.
    Again, this doesn't matter much when the drive is half empty or better, but it does when it gets fullish, and it does especially when it gets fullish if you are regularly dealing with large files , like video or serious audio stuff.
    If you look through this discussion board you will see quite a few complaints from people who find that their drive gets "slow". Often you will see that say that "still have 10 or 20 gigs free" or the like. On modern large drives by this stage they are usually in fact down to the point where the internal defragmentation routines can no longer operate , where their drives are working like navvies to keep up with finding space for any larger files, together with room for "scratch files", virtual memory, directories etc etc etc. Such users are operating in a zone where they put a lot more stress on their drives as a result, often start complaining of increased "heat", etc etc. Most obviously, though, the computer slows down to a speed not much better than that of molasses. Eventually the directories and other related files may collapse altogether and they find themselves with a next to unrecoverable disk problems.
    By this time, of course, defragging itself has already become just about impossible. The amount of work required to shift the data into contiguous blocks is immense, puts additional stress on the drive, takes forever, etc etc. The extent of fragmentation of free space at this stage can be simply staggering, and any large files you subsequently write are likely to be divided into many , many tens of thousands of fragments scattered across the drive. Not only this, but things like the "extents files", which record where all the bits are located, will begin to grow astronomically as a result, putting even more pressure on your already stressed drive, and increasing the risk of major failures.
    Ultimately this adds up to a situation where you can identify maybe three "phases" of mac life when it comes to the need for defragmentation.
    In the "first phase" (with your drive less than half full), it doesn't matter much at all - probably not enough to even make it worth doing.
    In the "second phase" (between , say 50% free space and 20% free space remaining) it becomes progressively more useful, but , depending on the use you put your computer to you won't see much difference at the higher levels of free space unless you are serious video buff who needs to keep their drives operating as efficiently and fast as possible - chances are they will be using fast external drives over FW800 or eSata to compliment their internal HD anyway.
    At the lower end though (when boot drives get down around the 20% mark on , say, a 250 or 500 Gig drive) I certainly begin to see an impact on performance and stability when working with large image files, mapping software, and the like, especially those which rely on the use of their own "scratch" files, and especially in situations where I am using multiple applications simultaneously, if I haven't defragmented the drive for a while. For me, defragmenting (I use iDefrag too - it is the only third party app I trust for this after seeing people with problems using TechToolPro and Drive Genius for such things) gives a substantial performance boost in this sort of situation and improves operational stability. I usually try to get in first these days and defrag more regularly (about once a month) when the drive is down to 30% free space or lower.
    Between 20% and 10% free space is a bit of a "doubtful region". Most people will still be able to defrag successfully in this sort of area, though the time taken and the risks associated increase as the free space declines. My own advice to people in this sort of area is that they start choosing their new , bigger HD, because they obviously are going to need one very soon, and try to "clear the decks" so that they maintain that 20% free buffer until they do. Defragging regularly (perhaps even once a fortnight) will actually benefit them substantially during this "phase", but maybe doing so will lull them into a false sense of security and keep them from seriously recognising that they need to be moving to a bigger HD!
    Once they are down to that last ten per cent of free space, though, they are treading on glass. Free space fragmentation at least will already be a serious issue on their computers but if they try to defrag with a utility without first making substantially more space available then they may find it runs into problems or is so slow that they give up half way through and do the damage themselves, especially if they are using one of the less "forgiving" utilities!
    In this case I think the best way to proceed is to clone the internal drive to a larger external with SuperDuper, replace the internal drive with a larger one and then clone back to it. No-one down to the last ten percent of their drive really has enough room to move. Defragging it will certainly speed it up, and may even save them from major problems briefly, but we all know that before too long they are going to be in the same situation again. Better to deal with the matter properly and replace the drive with something more akin to their real needs once this point is reached. Heck, big HDs are as cheap as chips these days! It is mad to struggle on with sluggish performance, instability, and the possible risk of losing the lot, in such a situation.

  • Itunes: Intertwining external hard drive and PC hard drive?

    I just moved my iTunes library from my old computer to my new one. But there's a problem. I want to have my computer rely on its own hard drive for the storage of my iTunes, but it currently relies on my external hard drive. I would also like to have new songs simultaneously downloaded on the external hard drive and the PC's hard drive, when the external hard drive is plugged in. When its not plugged in and I have new content downloaded to iTunes, it would store it on the PC's hard drive, and later when the external hard drive is replugged in, it will download new content onto it as an automatic update. Is there anyway I can successfully do this? Any help is appreciated.
    'Keep iTunes media folder organized' and 'copy files to iTunes media folder when adding to library' are enabled. I have been trying to enable 'consolidate files', but it will not stay enabled and it will not do anything when enabled. Again, any help is appreciated. Thank you.

    To move your library to the internal hard drive, copy the whole library folder structure (which includes the iTunes database - the iTunes Library.itl file - as well as your media).  If you currently have the database on the internal drive and only your media on the external one, see turingtest2's tip on Make a split library portable for the procedure that will bring everything together under one location.
    There is nothing in iTunes that supports the "Intertwining" of two drives that you describe.  The best approach is to use the internal drive as your "master" drive and regularly synchronize with the external one - many find Microsoft's SyncToy tool an effective method for this.
    "Consolidate files" is not a setting - it is a one-off operation that will copy any media files that are currently outside the standard iTunes folder structure into that structure; again see turingtest2's notes referenced above for the scenarios where you may need to do this in order to create a well-formed, manageable library structure.
