AQ Callbacks - Blocking background processes and best practices.

We are running several queues within our company and one of them uses the PL/SQL callback functionality.
Basically, several triggers can enqueue a message when the underlying tables are updated. The goal of the callback is to "treat" those messages, which means dequeuing them and passing them to some procedure (determined through a configuration table and the type of message). I don't know if I'm being clear enough on this, but it is important for what follows.
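For context, the triggers enqueue something along these lines (a simplified sketch: the correlation prefix, the entity name and the payload content shown here are only illustrative):
declare
   R_ENQUEUE_OPTIONS    DBMS_AQ.ENQUEUE_OPTIONS_T;
   R_MESSAGE_PROPERTIES DBMS_AQ.MESSAGE_PROPERTIES_T;
   V_MESSAGE_HANDLE     raw(16);
begin
   -- the correlation carries the entity name between '##' markers;
   -- the callback parses it to find the treatment procedure(s) in the configuration table
   R_MESSAGE_PROPERTIES.CORRELATION := 'SRC##MY_ENTITY##' || to_char(sysdate, 'YYYYMMDDHH24MISS');
   DBMS_AQ.ENQUEUE(QUEUE_NAME         => 'QUEUE_OWNER.MULTISRC_NOTIFQ',
                   ENQUEUE_OPTIONS    => R_ENQUEUE_OPTIONS,
                   MESSAGE_PROPERTIES => R_MESSAGE_PROPERTIES,
                   PAYLOAD            => SYS.ANYDATA.CONVERTVARCHAR2('identification of the updated row'),
                   MSGID              => V_MESSAGE_HANDLE);
end;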
In general the mechanism works perfectly, but we noticed on one of our databases that, after a relatively big batch of messages is enqueued (~500-1000 in one transaction), numerous background processes end up blocked on the system table SYS.AQ_SRVNTFN_TABLE. From that point the queue starts growing and messages are no longer dequeued by the callback, which does not seem to be executed at all anymore.
We were also unable to recompile the package that holds the callback procedure, nor to remove/add the subscriber registration on the queue with DBMS_AQ. It would hang there forever...
We are running Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 and the value of AQ_TM_PROCESSES = 0.
I don't necessarily have a complete and clear view of how the queueing mechanism works behind the scenes, so forgive me if I say something foolish :-)
Diagnosis:
After some research, it seems the background processes are blocked on the system table SYS.AQ_SRVNTFN_TABLE, on an index on MSGID.
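For reference, a query along these lines shows blocked sessions and the object they are waiting on (just a sketch; ROW_WAIT_OBJ# is not populated for every kind of wait, so V$LOCK may also be needed):
select S.SID, S.SERIAL#, S.PROGRAM, S.EVENT, S.BLOCKING_SESSION,
       O.OWNER, O.OBJECT_NAME, O.OBJECT_TYPE
  from V$SESSION S
  join DBA_OBJECTS O
    on O.OBJECT_ID = S.ROW_WAIT_OBJ#
 where S.BLOCKING_SESSION is not null
   and O.OWNER = 'SYS'
   and O.OBJECT_NAME like 'AQ_SRVNTFN%';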
If I understand correctly how the system works, the callback gets executed for a specific MSGID as we use it in the callback procedure to dequeue this message.
I've also discovered that the default value for the WAIT dequeue option is FOREVER...
So my idea was that, for some reason, the callback tries to dequeue a message that does not exist in the queue and... waits forever for the message to "appear".
This at first seemed pretty unlikely to me: why would the callback be executed if no message with the provided MSGID exists... but then you start doubting :-)
Attempt to resolve:
We decided to alter the WAIT option so that the dequeue in the callback does not wait forever.
We ran some tests (outside the callback) with NO_WAIT and also with a wait of a few seconds. Both worked, so we added a wait of 60s in the callback and added some tracing.
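Our standalone tests were essentially along these lines (a simplified sketch; the message id and the 5-second wait are only illustrative):
declare
   R_DEQUEUE_OPTIONS    DBMS_AQ.DEQUEUE_OPTIONS_T;
   R_MESSAGE_PROPERTIES DBMS_AQ.MESSAGE_PROPERTIES_T;
   O_PAYLOAD            ANYDATA;
   V_MESSAGE_HANDLE     raw(16);
begin
   -- bound the wait instead of relying on the default DBMS_AQ.FOREVER
   R_DEQUEUE_OPTIONS.WAIT          := 5; -- or DBMS_AQ.NO_WAIT
   R_DEQUEUE_OPTIONS.MSGID         := hextoraw('B4FFD1115523A46EE040007F0100304F');
   R_DEQUEUE_OPTIONS.CONSUMER_NAME := 'MULTISRC_NOTIFSUBSCR';
   DBMS_AQ.DEQUEUE(QUEUE_NAME         => 'QUEUE_OWNER.MULTISRC_NOTIFQ',
                   DEQUEUE_OPTIONS    => R_DEQUEUE_OPTIONS,
                   MESSAGE_PROPERTIES => R_MESSAGE_PROPERTIES,
                   PAYLOAD            => O_PAYLOAD,
                   MSGID              => V_MESSAGE_HANDLE);
   commit;
exception
   when others then
      -- ORA-25263 (no message with this msgid) or ORA-25228 (dequeue timed out) end up here
      dbms_output.put_line(sqlerrm);
      rollback;
end;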
Since then, no more background processes seem to hang there, and we are able to alter the callback procedure normally. But with the added tracing, we noticed an unexplained behavior in the execution of the callback:
- callback runs
- ORA-25263: no message in queue QUEUE_OWNER.MULTISRC_NOTIFQ with message ID B4FFD1115523A46EE040007F0100304F
- callback runs again and message is dequeued and treated correctly
The first Oracle error happens immediately, not after 60s as the WAIT option specifies.
Questions:
- Is the way the callback method is implemented correct (see code below)?
- Do we need to commit in the callback method? What are the implications of committing or not? (see the sketch after the callback code below)
- How would you explain the behavior of the callback after setting the dequeue WAIT to 60s, and why does it not actually wait for 60s?
The configuration of the queue and the callback is as follows:
- the payload type is SYS.ANYDATA
- an extra index is created on the CORRID column
begin
   -- create queue table
   begin
      DBMS_AQADM.CREATE_QUEUE_TABLE(QUEUE_TABLE        => 'QUEUE_OWNER.MULTISRC_NOTIFTAB'
                                   ,QUEUE_PAYLOAD_TYPE => 'SYS.ANYDATA'
                                   ,MULTIPLE_CONSUMERS => true);
   end;
   -- create and start queue
   begin DBMS_AQADM.CREATE_QUEUE(QUEUE_NAME => 'QUEUE_OWNER.MULTISRC_NOTIFQ', QUEUE_TABLE => 'QUEUE_OWNER.MULTISRC_NOTIFTAB'); end;
   begin DBMS_AQADM.START_QUEUE(QUEUE_NAME => 'QUEUE_OWNER.MULTISRC_NOTIFQ'); end;
   -- grant access to the queue to PDO
   -- add a subscriber to the queue and register the plsql to execute
   begin
      DBMS_AQADM.ADD_SUBSCRIBER(QUEUE_NAME => 'QUEUE_OWNER.MULTISRC_NOTIFQ'
                               ,SUBSCRIBER => SYS.AQ$_AGENT('MULTISRC_NOTIFSUBSCR', null, null));
      DBMS_AQ.REGISTER(SYS.AQ$_REG_INFO_LIST(SYS.AQ$_REG_INFO('QUEUE_OWNER.MULTISRC_NOTIFQ:MULTISRC_NOTIFSUBSCR'
                                                             ,DBMS_AQ.NAMESPACE_AQ
                                                             ,'plsql://PDO.PO_NOTIFY.MULTISRC_NOTIF_SUBSCRIBER?PR=0'
                                                             ,HEXTORAW('FF')))
                      ,1);
   end;
end;
create index queue_owner.multisrcq_corrid on queue_owner.multisrc_notiftab (CORRID);
The callback procedure is as follows:
procedure MULTISRC_NOTIF_SUBSCRIBER(context  raw,
                                      REGINFO  SYS.AQ$_REG_INFO,
                                      DESCR    SYS.AQ$_DESCRIPTOR,
                                      PAYLOAD  raw,
                                      PAYLOADL number) is
    L_METHOD constant varchar2(50) := 'MULTISRC_NOTIF_SUBSCRIBER';
    R_DEQUEUE_OPTIONS    DBMS_AQ.DEQUEUE_OPTIONS_T;
    R_MESSAGE_PROPERTIES DBMS_AQ.MESSAGE_PROPERTIES_T;
    V_MESSAGE_HANDLE     raw(16);
    O_PAYLOAD            ANYDATA;
    cursor C_TREATMENTS(P_ENTITY in varchar2) is
      select T.MNOT_PROCEDURE_C
        from TA_GEN.MULTISRC_NOTIF_TREATMENTS T
       where T.MNOT_ENTITY_C = P_ENTITY
         and T.MNOT_BEGIN_D < sysdate
         and ((T.MNOT_END_D > sysdate and T.MNOT_END_D is not null) or
             (T.MNOT_END_D is null))
       order by T.MNOT_PRIORITY_N asc;
    WK_CORRID    varchar2(128);
    WK_ENTITY    TA_GEN.MULTISRC_NOTIF_TREATMENTS.MNOT_ENTITY_C%type;
    WK_PROCEDURE TA_GEN.MULTISRC_NOTIF_TREATMENTS.MNOT_PROCEDURE_C%type;
    CT_EXEC number(2) := 0;
  begin
    -- DGH: 15.12.11 / added a wait of 60 seconds for dequeue to avoid infinite waiting (default is FOREVER)
    R_DEQUEUE_OPTIONS.WAIT := 60;
    -- dequeue message
    R_DEQUEUE_OPTIONS.MSGID         := DESCR.MSG_ID;
    R_DEQUEUE_OPTIONS.CONSUMER_NAME := DESCR.CONSUMER_NAME;
    DBMS_AQ.DEQUEUE(QUEUE_NAME         => DESCR.QUEUE_NAME,
                    DEQUEUE_OPTIONS    => R_DEQUEUE_OPTIONS,
                    MESSAGE_PROPERTIES => R_MESSAGE_PROPERTIES,
                    PAYLOAD            => O_PAYLOAD,
                    MSGID              => V_MESSAGE_HANDLE);
    -- extract entity name
    WK_CORRID := R_MESSAGE_PROPERTIES.CORRELATION;
    WK_ENTITY := SUBSTR(WK_CORRID,
                        INSTR(WK_CORRID, '##') + 2,
                        (INSTR(WK_CORRID, '##', 1, 2) -
                        INSTR(WK_CORRID, '##')) - 2);
    -- execute treatment(s)
    open C_TREATMENTS(WK_ENTITY);
    loop
      fetch C_TREATMENTS
        into WK_PROCEDURE;
      exit when C_TREATMENTS%notfound;
      execute immediate 'begin ' || WK_PROCEDURE || '(:MSG); end;'
        using O_PAYLOAD;
      CT_EXEC := CT_EXEC + 1;
    end loop;
    close C_TREATMENTS;
  exception
    when others then
      if C_TREATMENTS%isopen then close C_TREATMENTS; end if;
      PO_NOTIFY.TRACE_MULTISRC_NOTIF(L_METHOD,
                                     sqlerrm,
                                     DESCR.MSG_ID,
                                     R_MESSAGE_PROPERTIES.CORRELATION,
                                     WK_ENTITY,
                                     WK_PROCEDURE);
      rollback;
  end MULTISRC_NOTIF_SUBSCRIBER;
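To illustrate the question about committing, a variant of the callback could look like the sketch below: ORA-25263 trapped through a named exception instead of falling into WHEN OTHERS, and a commit once the message has been dequeued and treated. This is only a sketch, not what is deployed; the treatments loop is elided and the trace call gets NULLs for its last arguments.
procedure MULTISRC_NOTIF_SUBSCRIBER_SKETCH(context  raw,
                                           REGINFO  SYS.AQ$_REG_INFO,
                                           DESCR    SYS.AQ$_DESCRIPTOR,
                                           PAYLOAD  raw,
                                           PAYLOADL number) is
    L_METHOD constant varchar2(50) := 'MULTISRC_NOTIF_SUBSCRIBER_SKETCH';
    E_NO_SUCH_MESSAGE exception;
    pragma exception_init(E_NO_SUCH_MESSAGE, -25263);
    R_DEQUEUE_OPTIONS    DBMS_AQ.DEQUEUE_OPTIONS_T;
    R_MESSAGE_PROPERTIES DBMS_AQ.MESSAGE_PROPERTIES_T;
    V_MESSAGE_HANDLE     raw(16);
    O_PAYLOAD            ANYDATA;
  begin
    R_DEQUEUE_OPTIONS.WAIT          := 60;
    R_DEQUEUE_OPTIONS.MSGID         := DESCR.MSG_ID;
    R_DEQUEUE_OPTIONS.CONSUMER_NAME := DESCR.CONSUMER_NAME;
    DBMS_AQ.DEQUEUE(QUEUE_NAME         => DESCR.QUEUE_NAME,
                    DEQUEUE_OPTIONS    => R_DEQUEUE_OPTIONS,
                    MESSAGE_PROPERTIES => R_MESSAGE_PROPERTIES,
                    PAYLOAD            => O_PAYLOAD,
                    MSGID              => V_MESSAGE_HANDLE);
    -- ... parse the correlation and run the treatments exactly as in the current version ...
    -- make the dequeue (and whatever the treatments changed) permanent for this notification session
    commit;
  exception
    when E_NO_SUCH_MESSAGE then
      -- the message with DESCR.MSG_ID is not (or not yet) visible to this consumer:
      -- trace it and roll back instead of waiting or failing silently
      PO_NOTIFY.TRACE_MULTISRC_NOTIF(L_METHOD, sqlerrm, DESCR.MSG_ID, null, null, null);
      rollback;
    when others then
      PO_NOTIFY.TRACE_MULTISRC_NOTIF(L_METHOD, sqlerrm, DESCR.MSG_ID, null, null, null);
      rollback;
  end MULTISRC_NOTIF_SUBSCRIBER_SKETCH;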

Helping you with the specific issue is going to be difficult without direct access to the servers, but given the importance this system seems to have for your business, why are you not running a fully supported version (10.2 has been in extended support for more than six months), and why, even in the current configuration, are you not patched to 10.2.0.5?
My instinct would be to focus on moving to 11.2.0.3 as quickly as possible, with a corresponding move to a current operating system version if your O/S is similarly out of date.

Similar Messages

  • Can anyone recommend tips and best practices for FrameMaker-to-RoboHelp migration ?

    Hi. I'm planning a migration from FM (unstructured) to RH. I'd appreciate any tips and best practices for the migration process. (Note that at the moment I plan to import the FM documents into, not link to them from, RH.)
    For example, my current FM files are not optimally "chunked", so autoconverting FM file sections (based on, say, Header 1 paragraph layout) won't always result in an optimal topic set. I'm thinking of going through the FM docs and inserting dummy paragraphs with a tag something like "topic_break", placed in more appropriate locations than the existing headers. Then, during import to RH, I'd use the topic_break paragraph to demarcate the topics. Is this a good technique? Beyond paragraph-based import delineation, do you know of any guidelines for redrafting FM chapter file content into RH topics?
    Also, are there any considerations/gotchas in the areas of text review workflow, multiple authoring, etc. after the migration? (I've not managed an ongoing RH doc project before, so any advice would be greatly appreciated.)
    Thanks in advance!
    -Kurt
    BTW, the main reason for the migration: info is presently scattered in various (and way too many) PDF files. There's no global index. I'd like to make a RoboHelp HTML interface (probably WebHelp layout) so it can be a one-stop documentation shop for users.

    Jeff
    Fm may produce better output for your requirements but for many what Rh produces works just fine. My recent finding re Word converting images to JPG before import will mean a better experience for many.
    Once Rh is set up, and it's not difficult, for many its printed documents will do the job. I would say try it and then judge.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • FWSM interface monitoring and best practices documentation.

    Hello everyone
     I have a couple of questions regarding vlan interface monitoring and best practices specifically for this service module.
    I couldn't find a suggestion or guideline for how to define a VLAN interface on a management station. The FWSM total throughput is 5.5 Gbps and the interfaces are mapped to VLANs carried on trunks over 10 Gb EtherChannels. Is there a common practice, or past experience, for setting physical parameters on logical interfaces? The "show interface" command states BW as unknown.
     Additionally, do any of you have a document addressing best practices for FWSM? I have this for other platforms and general recommendations based on newer ASA versions but nothing related to FWSM.
    Thanks a lot!
    Regards
    Guido

    Hi,
    If you are looking for some more commands to check the throughput through the module:-
    show firewall module <number> traffic
    Also, I think as this platform is end of life, you might have to check for some old documentation from Cisco on the best practices.
    http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/prod_white_paper0900aecd805457cc.html
    https://supportforums.cisco.com/discussion/11540181/ask-expertconfiguring-troubleshooting-best-practices-asa-fwsm-failover
    Thanks and Regards,
    Vibhor Amrodia

  • EP Naming Conventions and Best Practices

    Hi all
    Please provide me EP Naming Conventions and Best Practices documents
    Thanks
    Vijay

    Hi Daya,
    For SAP Best Practices for Portal, read thru these documents :-
    [SAP Best Practices for Portal - doc 1 |http://help.sap.com/bp_epv170/EP_US/HTML/Portals_intro.htm]
    [SAP Best Practices for EP |http://www.sap.com/services/pdf/BWP_SAP_Best_Practices_for_Enterprise_Portals.pdf]
    And for Naming Conventions in EP, please go through these two links:-
    [Naming Conventions in EP|naming standards;
    [EP Naming Conventions |https://websmp210.sap-ag.de/~sapidb/011000358700005875762004E]
    Hope this helps,
    Regards,
    Shailesh
    Edited by: Shailesh Kumar Nagar on May 30, 2008 4:09 PM

  • EP Naming Conventions and Best Practices documents

    Hi all
    Please provide me EP Naming Conventions and Best Practices documents
    Thanks
    Vijay

    Hi,
    Check this:
    Best Practices in EP
    http://help.sap.com/saphelp_nw04/helpdata/en/43/6d9b6eaccc7101e10000000a1553f7/frameset.htm
    Regards,
    Praveen Gudapati

  • Adobe LiveCycle Process Management Overview and Best Practices

    To get familiar with the best practices of process management watch this recording of a webinar hosted by Avoka Technologies.

  • Large heap sizes, GC tuning and best practices

    Hello,
    I’ve read in the best practices document that the recommended heap size (without JVM GC tuning) is 512M. It also indicates that GC tuning, object number/size, and hardware configuration play a significant role in determining what the optimal heap size is. My particular Coherence implementation contains a static data set that is fairly large in size (150-300k per entry). Our hardware platform contains 16G physical RAM available and we want to dedicate at least 1G to the system and 512M for a proxy instance (localstorage=false) which our TCP*Extend clients will use to connect to the cache. This leaves us 14.5G available for our cache instances.
    We’re trying to determine the proper balance of heap size vs num of cache instances and have ended up with the following configuration. 7 cache instances per node running with 2G heap using a high-units value of 1.5G. Our testing has shown that using the Concurrent Mark Sweep GC algorithm warrants no substantial GC pauses and we have also done testing with a heap fragmentation inducer (http://www.azulsystems.com/e2e/docs/Fragger.java) which also shows no significant pauses.
    The reason we opted for a larger heap was to cut down on the cluster communication and context switching overhead as well as the administration challenges that 28 separate JVM processes would create. Although our testing has shown successful results, my concern here is that we’re straying from the best practices recommendations and I’m wondering what others thoughts are about the configuration outlined above.
    Thanks,
    - Allen Bettilyon

  • Installation and best practices

    I saw this link being discussed in a thread about "Live Type," but I think it needs a thread of its own, so I'm going to begin it here.
    http://support.apple.com/kb/HT4722?viewlocale=en_US
    I have Motion 4 (and everything else with FCS 2, of course), and just purchased Motion 5 via the App Store. (I'm sure I'll be buying FCP X also at some point, but decided to hold off for now.)
    When I was reading the "Live Type" thread there was some discussion about Motion 5 overwriting Motion 4 projects or something like that, so I started freaking out. I've opened both 5 and 4, but am closing them until I understand what's going on.
    Since I purchased Motion 5 from the App Store, I'm just under the assumption that my Mac took care of everything correctly. I see that Motion 4 resides in the FCS folder and Motion 5 is a stand-alone in the Applications folder.
    So I guess my questions are these ...
    1) What's so important about having FCS 2009 on a separate drive? I have a couple of other internal drives with lots and lots of free space, so that isn't an issue for me. I just wonder why this is a "best practice." The two programs CAN share the same drive ...the link says so.
    2) I suppose that I'll let 4 and 5 reside side by side for now. How do I make sure Motion 5 won't screw up my Motion 4 projects? (My hunch is that you can open an M4 project in M5 and do a "save as" ...this will create an M5 version and leave the M4 alone. Am I correct about that?) Maybe the answer to this is related to my first question.
    3) I want to make sure I'm not missing something by the words "startup disk." Although I have 3 drives in my MacPro, only one is a "startup disk" ...the other two are for storage. If I move everything from FCS to a different internal drive, does it make any difference that the destination drive is NOT a startup disk?
    **I'm gonna separate this part out a bit because it may or may not be related to the previous questions.**
    I noticed that Motion 5 came with very little content and only a few templates, but I read in another thread that additional content can be downloaded free when I do an update. I also read in that thread that this free content is pretty much the same as the content that I have with Motion 4.
    1) If I download this additional content (which is basically the same as what's in Motion 4), will I just have a duplicate of all that material?
    2) Could this be part of the reason that Apple recommends that Motion 5 be on a separate drive ...so that the content and templates don't get mixed up?
    --Just a couple months ago, I finally got around to cleaning out all the FCS content, throwing away duplicates and organizing things properly. If I've got to go through this process again, I want to do it correctly the first time.

    When you install Motion 5 or FCP X, all your Final Cut Studio apps are moved into a folder called Final Cut Studio.  This is because you can't have two apps with the same name in the same folder.  I'm running them both on the same drive, no problems.
    Motion 5 does not automatically overwrite any Motion project files; that is hogwash.  When you open a v.4 file in 5, it will ask if you want to convert the original to 5, or open a copy called Untitled and make it a v.5 project.  Very simple.  If you're super paranoid, duplicate the original Motion project file and open the copy in v.5 to be extra safe.  Remember, once a project file is version 5, it can't be opened in previous versions.
    You can't launch both at the same time, duh.
    The System Drive, or OS drive, is just that: the drive your operating system is installed on.  All applications should be on that drive, and NOT be moved to other drives.  Especially pro apps like these.  Move them to a non-OS drive, and you'll regret it.  Trust me.
    Yes, run Software Update (Apple Menu) and you'll get additional content for Motion 5 that v.4 doesn't have.  It won't be any problem with space on your drive.  That stuff takes up very little space.
    Apple recommends two different OS drives, or partitions, only to avoid an overwhelming flood of people screaming "What happened to my Final Cut Studio legacy apps?" and other such problems.  Hey, they're put into a new folder, that's all, breathe...
    If you're having excessive problems, you may not have hardware up to speed.  CPU speed is needed, at least 8GB RAM (if not 12 or 16 for serious work), but your graphics card really needs to be up to speed.  iMacs and MacBook Pros barely meet the requirements but will work well.  Mac Pros can get much more powerful graphics cards.  Airs and Minis should be avoided like the plague.
    After checking hardware, be sure to run Disk Utility to "repair" all drives.  Then, get the free app "Preference Manager" by Digital Rebellion (dot com) to safely trash your app's preference files, which resets it, and can fix a lot of current bugs.

  • APO - Aggregate Forecasting and Best Practices

    I am interested in options to save time and improve accuracy of forecasts in APO Demand Planning.   Can anyone help me by recommending best practices, processes and design that they have seen work well?
    We currently forecast at the Product level (detailed).  We are considering changing that to the Product Family level.   If you have done this, please reply.

    Hello Dan -
    Doing it at the Product level is very detailed (but it depends on the number of SKUs available and to be forecasted).
    Here on my project we have a sample size of about 5000 finished goods to start with, and forecasting at that minute level won't help. I have defined a product group level where I have linked all the similar FGs. That way, when you are working with the product group, you can be a little more assertive in your forecasting. After that, you can use proportional factors to help you allocate the forecast down to product level. This way you are happy, and management is happy (high-level report), as they don't have to go through all the data that is not even necessary for them to see.
    Hope this helps.
    Regards,
    Suresh Garg

  • Oracle EPM 11.1.2.3 Hardware Requirement and best practice

    Hello,
    Could anyone help me find the Minimum Hardware Requirement for the Oracle EPM 11.1.2.3 on the Windows 2008R2 Server? What's best practice to get the optimum performance after the default configuration i.e. modify or look for the entries that need to be modified based on the hardware resource (CPU and RAM) and number of users accessing the Hyperion reports/files.
    Thanks,
    Yash

    Why would you want to know the minimum requirements? Surely it would be best to have optimal server specs; the nearest you are going to get is contained in the standard deployment guide - About Standard Deployment.
    Saying that, it is not possible to provide stats based on nothing; you would really need to undertake a technical design review/workshop, as there are many topics to cover before coming up with server information.
    Cheers
    John

  • Static NAT refresh and best practice with inside and DMZ

    I've been out of the firewall game for a while and now have been re-tasked with some configuration, both updating ASAs to 8.4 and making some new services available. So I've dug into refreshing my knowledge of NAT operation and have a question based on best practice, and would like a sanity check.
    This is very basic, I apologize in advance. I just need the cobwebs dusted off.
    The scenario is this: If I have an SQL server on an inside network that a DMZ host needs access to, is it best to present the inside (SQL server in this example) IP via static to the DMZ or the DMZ (SQL client in this example) with static to the inside?
    I think it's best to present the higher security resource into the lower security network. For example, when a service from the DMZ is made available to the outside/public, the real IP from the higher security interface is mapped to the lower.
    So I would think the same would apply to the inside/DMZ, making 'static (inside,dmz)' the 'proper' method for the pre 8.3 and this for 8.3 and up:
    object network insideSQLIP
    host xx.xx.xx.xx
    nat (inside,dmz) static yy.yy.yy.yy
    Am I on the right track?

    Hello Rgnelson,
    It is not related to the security level of the zone; instead, it is about how the behavior should be. What I mean is, for
    nat (inside,dmz) static yy.yy.yy.yy
    - Any traffic hitting translated address yy.yy.yy.yy on the dmz zone should be re-directed to the host xx.xx.xx.xx on the inside interface.
    - Traffic initiated from the real host xx.xx.xx.xx should be translated to yy.yy.yy.yy if the hosts accesses any resources on the DMZ Interface.
    If you reverse it to (dmz,inside) the behavior will be reversed as well, so If you need to translate the address from the DMZ interface going to the inside interface you should use the (dmz,inside).
    For your case I would say what is common, since the server is in the INSIDE zone, you should configure
    object network insideSQLIP
    host xx.xx.xx.xx
    nat (inside,dmz) static yy.yy.yy.yy
    At this time, users from the DMZ zone will be able to access the server using the yy.yy.yy.yy IP Address.
    HTH
    AMatahen

  • Not a question, but a suggestion on updating software and best practice (Adobe we need to create stickies for the forums)

    Lots of you are hitting the brick wall when updating, and the end result is a non-recoverable project.   In a production environment with projects due, it's best that you never update while in the middle of projects.  Wait until you have a day or two of down time, then test.
    For best practice, get into the habit of saving off your projects to a new name by incremental versions.  i.e. "project_name_v001", v002, etc.
    Before you close a project, save it, then save it again to a new version. This way you'll always have two copies and will not lose the entire project.  Most projects crash upon opening (at least in my experience).
    At the end of the day, copy off your current project to an external drive.  I have a 1TB USB3 drive for this purpose, but you can just as easily save off just the PPro, AE and PS files to a stick.  If the video corrupts, you can always re-ingest.
    Which leads us to the next tip: never clear off your cards or wipe the tapes until the project is archived.  Always cheaper to buy more memory than recouping lost hours of work, and your sanity.
    I've been doing this for over a decade and the number of projects I've lost?  Zero.  Have I crashed?  Oh, yeah.  But I just open the previous version, save a new one and resume the edit.

    Ctrl + B to show the Top Menu
    View > Show Sidebar
    View > Show Status Bar
    Deactivate Search Entire Library to speed things up.
    This should make managing your iPhone the same as it was before.

  • Vendor Prepayment Process -  SAP best practice

    Hi Friends,
    I am looking for SAP Best Practices for Vendor Prepayment process.
    Could you guide me step by step approach..
    Is there any config ?? I have maintained Prepayment Indicator in Vendor Master..
    Please throw some light..
    Regards,
    Jackie

    Hi
    check following link
    [http://help.sap.com/bp_mediav1600/Media_US/HTML/Scenarios/V1B_MED_Scen_EN_US.htm]
    [http://help.sap.com/bp_mediav1600/Media_US/HTML/Scenarios/V1C_MED_Scen_EN_US.htm]
    Regards
    Kailas Ugale

  • Clarification of background processing and BAPI_DOCUMENT_CREATE2

    Hello,
    I am sorry to bring up this long-discussed topic,
    but I would very much appreciate clarification on how I can check in new originals for a new document using background processing.
    We are trying to use BAPI_DOCUMENT_CREATE2 to create a document and check in an original from a network path \\server\file.pdf.
    I have read that I should add the path in AL11 - I did that, but how should I use it?
    I have read that I should use the SAPFTPA & SAPHTTPA parameters when calling the function - I have these in SM59 and the connection test is successful - but what should I do with them?
    I will be very thankful for your help.
    here's an example from our code:
      call function 'BAPI_DOCUMENT_CREATE2'
        exporting
          documentdata         = ls_doc
         pf_ftp_dest          = 'SAPFTPA'
         pf_http_dest         = 'SAPHTTPA'
        importing
          documenttype         = lf_doctype
          documentnumber       = lf_docnumber
          documentpart         = lf_docpart
          documentversion      = lf_docversion
          return               = ls_return
        tables
          documentdescriptions = lt_drat
          objectlinks          = lt_drad
          documentfiles        = lt_files.

    Hi,
    Populate the document original information according as per below guideline.
    The table 'DOCUMENTFILES' contains the entries for the originals that you want to allocate to the document. The meanings of the structure fields are:
    'SOURCEDATACARRIER': Data carrier name (optional) of a server on which
                         an original is stored
    'STORAGECATEGORY':   Name of the data carrier that you want to check
                         original into. The function module determines the
                         storage type, SAP Database, vault, or archive
                         using the name  of the data carrier.
    'DOCFILE':           File name of the original
    'WSAPPLICATION':     Name of the workstation application. This entry is
                         necessary when no workstation application has been
                         assigned to the document info record or the new
                         original is of another type.
    Regards
    Keerthi

  • Threaded VI, Threaded Sequence inter process communication best practice

    I'm curious if others have any advice on best practice for the following.
    We have VIs that run continuously during our test execution, sampling 4 different Analog Input boards in continuous mode.  For quite a while these VIs were passed the SequenceContext from the TS side along with the names of FileGlobals.  When the while loops had new samples from the boards, they would use the SequenceContext to set them from the VI.  They would also use the SequenceContext to get "signals" from TestStand.  That is, I'd set a variable on the TestStand side that would be read in the VI to make it behave differently (filtering, moving average, etc).  I used TS notifiers from the LV side to let TS know new data was available, and there are TS rendezvous used to partially synchronize the 4 separate boards with each other.
    We never really had any trouble with this and even though the code wasn't terribly complicated for me, others on the team have had a little trouble comprehending it.
    Recently, we ran into some issues not related to this code, but ultimately, in an attempt to solve them, I ended up re-writing this code to use functional globals.  Now the VI uses the FGs to set the data, and on the TestStand side the same FGs are used for reading.  Locking of data is handled by the non-reentrant VIs.  The FGs are also structured state-machine-like for signaling the VI to do something and letting TS know something has happened.
    This code seems to work well also and I think my team (and anyone that follows) will be able to follow this code a little bit easier.
    I'm not classically trained so I suspect there may be standard patterns that work here, I just haven't sought them out.  
    Is there a better way to do this that I'm not thinking of?  

    Hi SmokeMonster,
    If things are looking more legible and everything is functioning as desired, it sounds like the improvements are going to be some of the best for your company. Without more specific information about how information is being passed and handled, and a greater knowledge of the system and desired results, specific recommendations are going to be more difficult.
    From a higher level overview, the best resources you will find on development practices will be the NI TestStand Materials. A few that may be worth a look specific to your inquiry would be the following:
    Using LabVIEW with TestStand
    http://digital.ni.com/manuals.nsf/websearch/828A615BAB7CCECA86257A150065C12E
    TestStand System and Architecture Overview
    http://digital.ni.com/manuals.nsf/websearch/49D1BDF3B02279A8862579FB0063EC04
    TestStand Advanced Architecture Series
    http://www.ni.com/white-paper/7022/en
    Hopefully that gets you some good starting points to refer from.
    Regards,
    James W.
    Applications Engineer
    National Instruments
