Fcron: serialized jobs

My crontab contains several pacman-related jobs which must not get dispatched in parallel, so I use the serial keyword for them:
# fcrontab
!erroronlymail(true)
@first(2),serial 12h /usr/bin/pacman --noconfirm -Sy
%weekly,serial * * /usr/bin/pacman --noconfirm -Sc
%weekly,serial * * /usr/bin/yaourt -B /mnt/archive/backup/thor/packman
The first job never gets executed, can anyone tell me why?
The man page says "Fcron runs at most 1 serial jobs [...] simultaneously.". My understanding of this is that all jobs marked as "serial" get classically serialized, i.e. queued and executed one at a time. Maybe my interpretation is wrong and the "serial" keyword does not imply any queueing?
Regards,
lynix

Okay, so I changed my fcrontab to look like this:
@first(5),forcemail(true),erroronlymail(false) 2h /usr/bin/pacman --noconfirm -Sy
And still there is no pacman invocation after 19 minutes of uptime. Fcron is running and there are no error messages in the logs.
The entry looks exactly like the one from the man page saying "run after five minutes of execution the first time, then run every hour".
Any ideas?
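For reference, a minimal fcrontab sketch of how I understand "serial" queueing to work; the commands and frequencies below are placeholders, not my real jobs:

```
# hypothetical fcrontab: both jobs carry the serial option, so if their
# schedules ever collide, fcron should queue them and run them one at a
# time instead of dispatching them in parallel
@serial 12h /usr/local/bin/job-a
@serial 12h /usr/local/bin/job-b
```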

Similar Messages

  • CLI framerate and serial jobs on a cluster

    Hello,
    I have couple of questions for compressor (ver 4 )
    1) Is there a way to specify the frame rate for an image sequence using the command-line interface?
    The compressor help lists the following, but does not explain how such an option works. I tried a few different ways, in vain:
    -jobpath  -- url to source file.                        -- In case of Image Sequence, URL should be a file URL pointing to directory with image sequence.
    -- Additional parameters may be specified to set frameRate (e.g. frameRate=29.97) and audio file (e.g. audio=/usr/me/myaudiofile.mov).
    2) I have a managed cluster with 8 nodes, each running 4 instances. For some reason Compressor only uses 6 instances spread across several nodes, not all 32.
    3) Is there a way to specify and process just one job at a time for a given cluster? This would be the equivalent of a serialize option, but it does not seem to be available.
    Currently, when I submit multiple jobs, a few of them run at once, create havoc on the NFS mounts, and fail. I can limit the job queue to one, but that is not ideal.
    I would appreciate any pointers.
    Thanks
    --Amit

    Hi Amit, just saw your post. I assume you are passing the settings via the "-settingspath" <my_specific_job_settings_in_a_file_path_somewhere.settings> option on the compressor command?
    If so, it's a very simple matter to specify the frame rate etc etc on the VIDEO setting itself, save it and use it.
    I don't recall that such "atomic settings" were actually available in v3 of compressor.app. I'll check later for v4. I'd be surprised if they are. :)
    What I've done in the past is simply to make my own personalised settings (and destinations) using the Compressor.app v4 UI (save as... my name... i.e. "prores_noaudio_PAL") and pass these file paths to the "-settingspath" parm on the compressor CLI. In your case I'd imagine a simple VIDEO setting for your frame rate and you're set!
    Compressor.app v4 saves any of your own R.Y.O. settings in your ~/Library/Application Support/Compressor/Settings folder. You can copy, move or link these anywhere you like and pass these customised parms to the compressor CLI as above.
    I also doubt these atomic settings are available as AppleScript properties; there are likely no vars like that there, methinks. I recall the same ones exist as they do for the CLI, and in compressor.app 4 they probably support Qmaster. Yeah, Compressor.app is AppleScriptable; it's not in the default library list, so just add it...
    Lastly, at a guess, these compressor ".setting" files are probably XML based, so you might consider tinkering with one in an editor.
    Anyway, try the "-settingspath" operand and see how you go.
    2) The way Qmaster schedules across unmanaged transcode nodes is likely based on how busy each node is. You should be able to see if there is a pattern simply by looking at the Qmaster job controller.log. See Share Monitor.app for info, or use Console.app to look in your ~/Library/Application Support/Apple Qmaster/logs directory. There will be something in there.
    Also have a look for any errors in the cluster services logs, in case the services you expect to be there are actually not.
    Are you using a managed cluster? Personally I have found this quite stable. Make sure those services are for managed clusters only...
    3) Yes, you can specify a specific cluster using the "-clusterid" operand. Should you have more than one managed cluster in the farm, this is a cool way to go. Also consider the "-priority" operand for hot jobs. Make sure all submissions are low priority... it's batch... works great!!
    4) NFS mounts... well, the simple rule is to keep them on their own subnet, make sure all destinations and sources are available to all hosts, set compressor options to copy to cluster only when you must, and make the service time for NFS READ requests as fast as you can (jumbo frames, dedicated NICs and switches, and optimally fast read filesystems)... keep other traffic away from it. Works a treat!
    Should you turn something up, please post it here for others to see. I'm certainly interested.
    Sorry for typos.. Just on MTR with iPhone on way home. :)
    Hth
    Warwick
    Hong Kong

  • Peculiar cron/gsettings issue [SOLVED]

    08/2012 Update: For anyone who finds this looking for a solution to this issue, it seems gnome has changed something and cron now needs the DBUS_SESSION_BUS_ADDRESS environment variable set rather than XAUTHORITY. There are a couple methods to get this, but the one I use is a script that runs on login which just runs "echo $DBUS_SESSION_BUS_ADDRESS > ~/.dbus-session". After that, my wallpaper script just needs to read the file and export it before running gsettings set. A more robust way would be to fetch it from gnome-shell's environment like so:
    cat /proc/$(pgrep -u `whoami` ^gnome-shell$)/environ | grep -z DBUS_SESSION_BUS_ADDRESS
    You can use this in a wallpaper script to output the appropriate value (including DBUS_SESSION_BUS_ADDRESS=) which you can then wrap with $() and export:
    export $(cat /proc/$(pgrep -u `whoami` ^gnome-shell$)/environ | grep -z DBUS_SESSION_BUS_ADDRESS)
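Putting the two halves of that workaround together, a minimal sketch (the file path and bus address here are demo values, not a real session's):

```shell
#!/bin/sh
# Demo of the save-and-restore approach described above. SESSION_FILE and
# the bus address are made-up demo values; a real login script would save
# the live $DBUS_SESSION_BUS_ADDRESS instead.
SESSION_FILE=/tmp/dbus-session-demo

# --- run once at login: save the session bus address ---
DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus"
echo "$DBUS_SESSION_BUS_ADDRESS" > "$SESSION_FILE"

# --- run from the cron job: read it back and export it before gsettings ---
export DBUS_SESSION_BUS_ADDRESS="$(cat "$SESSION_FILE")"
echo "exported: $DBUS_SESSION_BUS_ADDRESS"
# gsettings set org.gnome.desktop.background picture-uri "file://..." would follow
```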
    Original post below:
    What I'm trying to do is have cron change my gnome 3 wallpaper every 15 minutes. My crontab looks like this:
    00,15,30,45 * * * * gsettings set org.gnome.desktop.background picture-uri "file://$(find ~/Pictures/Wallpapers -type f | shuf -n1)"
    Now my issue is that when I reboot, this does not function as it should. My wallpaper is not changed, nor is the gsettings key value. However, if I manually restart the cron daemon after I've logged in, it begins to function again. This issue is present with both fcron and dcron. The issue doesn't seem to be that cron is not running the jobs; here's my crond.log including 2 instances of the jobs that did not work as expected:
    Apr 25 13:45:00 localhost fcron[3116]: Job gsettings set org.gnome.desktop.background picture-uri "file://$(find ~/Pictures/Wallpapers -type f | shuf -n1)" started for user xion (pid 3117)
    Apr 25 13:45:03 localhost fcron[3116]: Job gsettings set org.gnome.desktop.background picture-uri "file://$(find ~/Pictures/Wallpapers -type f | shuf -n1)" completed
    Apr 25 13:47:51 localhost fcron[2874]: Job /usr/sbin/run-cron /etc/cron.daily completed (mailing output)
    Apr 25 13:47:51 localhost fcron[2874]: Can't find "/usr/sbin/sendmail". Trying a execlp("sendmail"): No such file or directory
    Apr 25 13:47:51 localhost fcron[2874]: Can't exec /usr/sbin/sendmail: No such file or directory
    Apr 25 13:47:51 localhost fcron[4234]: Job /usr/sbin/run-cron /etc/cron.hourly started for user systab (pid 4235)
    Apr 25 13:47:53 localhost fcron[4234]: Job /usr/sbin/run-cron /etc/cron.hourly completed
    Apr 25 14:00:00 localhost fcron[5157]: Job gsettings set org.gnome.desktop.background picture-uri "file://$(find ~/Pictures/Wallpapers -type f | shuf -n1)" started for user xion (pid 5158)
    Apr 25 14:00:02 localhost fcron[5157]: Job gsettings set org.gnome.desktop.background picture-uri "file://$(find ~/Pictures/Wallpapers -type f | shuf -n1)" completed
    So the job is being run, but it refuses to actually apply the change until I restart the daemon. What could be the cause of this issue?
    Update: Turns out I needed to set DISPLAY and XAUTHORITY for gsettings to be able to set anything. Is there a better way to set these or fetch them from the desktop shell than simply using export in the cron job script?
    Last edited by XionZui (2012-11-01 03:27:00)
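A sketch of the export approach from the update above; the DISPLAY value and XAUTHORITY path are typical defaults, not guaranteed to match every setup:

```shell
#!/bin/sh
# Hypothetical wrapper for the cron job: export the X session variables
# that gsettings needs, then run the real command. ":0" and ~/.Xauthority
# are common defaults, not universal ones.
export DISPLAY="${DISPLAY:-:0}"
export XAUTHORITY="${XAUTHORITY:-$HOME/.Xauthority}"
echo "DISPLAY=$DISPLAY"
# gsettings set org.gnome.desktop.background picture-uri "file://..." would go here
```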

    No luck on any front
    Apr 25 15:45:50 localhost fcron[2405]: adding new file xion
    Apr 25 15:46:50 localhost fcron[2405]: updating configuration from /var/spool/fcron
    Apr 25 15:46:50 localhost fcron[2405]: adding new file xion
    Apr 25 16:00:00 localhost fcron[6003]: Job gsettings set org.gnome.desktop.background picture-uri "file://$(find /home/xion/Pictures/Wallpapers -type f | shuf -n1)" started for user xion (pid 6004)
    Apr 25 16:00:02 localhost fcron[6003]: Job gsettings set org.gnome.desktop.background picture-uri "file://$(find /home/xion/Pictures/Wallpapers -type f | shuf -n1)" completed
    Apr 25 16:01:00 localhost fcron[6018]: Job /usr/sbin/run-cron /etc/cron.hourly started for user systab (pid 6019)
    Apr 25 16:01:02 localhost fcron[6018]: Job /usr/sbin/run-cron /etc/cron.hourly completed
    Apr 25 16:03:41 localhost fcrontab[6032]: fcrontab : editing xion's fcrontab
    Apr 25 16:04:50 localhost fcron[2405]: updating configuration from /var/spool/fcron
    Apr 25 16:04:50 localhost fcron[2405]: adding new file xion
    Apr 25 16:15:00 localhost fcron[6072]: Job /home/xion/Scripting/gnome-desktop.sh started for user xion (pid 6073)
    Apr 25 16:15:02 localhost fcron[6072]: Job /home/xion/Scripting/gnome-desktop.sh completed
    Could it be related to read permissions on my home folder or gsettings not modifying the correct user settings? Is there any way to see the output of the commands when they're run? What could possibly be changed by restarting the cron daemon with sudo?
    Edit: Alright, I've narrowed it down a little bit. It's successfully running the script. gsettings get works fine, the find and shuf commands work fine and output a string in the appropriate format, and I can have it write to a text file. The only thing that won't work for whatever reason is gsettings set, and it doesn't give any errors (or any output at all) when it runs. I've tried it with multiple keys and simple integer values, and it simply won't change anything. Is there possibly an environment variable that needs to be set for it to work properly which isn't set when cron is started at boot?
    Last edited by XionZui (2011-04-26 00:50:36)

  • Need help with a query to display lot and serial numbers for a WIP job component

    Hi ALL,
    I have a requirement as below.
    I need to display lot and serial numbers for a WIP job component. I have an XML report in which, for each WIP job component, I need to check whether it is a lot-controlled item or a serial-controlled item, and display some data based on that. Can you please help me with the query to identify the lot and serial numbers for a WIP job component?
    Thanks

    Thank you for replying Gordon.  I did not remember I had asked this before.  I no longer have access to the other account. 
    What I need in the query is a list of items with the on-order quantity and when we expect the order to be received. The receiving date is based on the PO delivery date. The trick here is that I need the master SKU to show the delivery date of the component SKU. In my scenario all components have the same delivery date for the master SKU. If there are multiple delivery dates because each warehouse is getting them on a different date, then I want the delivery date for that warehouse.
    Let me know if this is possible and if yes please guide towards a query for it.
    Thank you.

  • Serial control in WIP discrete jobs

    Hi gurus,
    I created a discrete job with serial control enabled for the components, but when I try to query the job I am unable to find it.
    What mistake did I make?
    Going through the Oracle white papers, they mention changing the profile option
    "TP:WIP Operation Backflush Setup" to online processing mode (to have serial control in the job),
    but I am not able to find that profile option under the WIP responsibility.
    Could you please guide me through the mandatory steps for building discrete jobs with serial control and performing move transactions?
    Waiting for a solution, please.

    Dear Mr.Riccardo
    I would like to bring to your notice a document from Metalink. It says that we cannot do move transactions of serial-controlled items in Desktop forms.....
    Subject: Is Serial Number Association Functionality Only Available Using Supply Chain Mobile Applications?
    Doc ID: Note:265512.1 Type: HOWTO
    Last Revision Date: 01-APR-2004 Status: PUBLISHED
    The information in this article applies to:
    Oracle Work in Process - Version: 11.5.9
    This problem can occur on any platform.
    Goal
    Is the new functionality of Serial Number Association only available through Mobile Applications?
    Fix
    This functionality provides support for standard discrete jobs with or without a routing.
    The User Interface for this functionality is via Mobile Applications
    Serialization can be activated within a job... this can be for the entire job or beginning at
    a specified operation. This new feature is optional. It is defaulted by an org parameter and controlled at the job level. The user will need to define the default steps for the moves.
    When turned on, when jobs are created automatically via CTO or Planning, the serial will be turned on.
    The current functionality of serial number generation and uniqueness is used. Auto-generation and association at the time of job creation via Mass Load is added with this functionality. Import of serials from 3rd-party systems is also supported.
    The serial association can occur any time during or after job creation. Users can use pre-generated or on-the-fly serial numbers. This association is also not tied to job quantity and is changeable until assigned to a specific unit.
    Once the job is 'Serialized', you cannot transact this job through any of the Desktop forms. A new menu structure is added in the Mobile Applications called 'Serialized Assembly and Material Transaction'.
    Users utilize the mobile pages under this menu to perform serialized move or completion transactions.
    These assembly items for which the serial numbers are associated at the time of creating the wip job itself can be transacted only through Mobile Supply Chain Applications (MSCA) .

  • Third party replaced motherboard under warranty, did a great job. Serial number issue

    The IdeaCentre B500 serial number in the BIOS no longer matches, since the MB was changed today. Is that going to cause any issues with future warranty work (which I don't expect), and will it affect any other functions, such as recovery if needed?

    No, not a refurbished phone. Brand new in the packaging from Vodafone. I am currently researching the alleged "third party hardware" and believe it is the dock connector that Apple claim they didn't put there. These retail at about £4.99. Believe me, if I had had a third party repair of a £4.99 piece of kit I would just do it again not take it to Apple to try and fix. There is no logic to it. I'm not going to go down the cynical route of the fact that this "intruder discovery" was made whilst the phone was out of my sight being worked on by their repair staff!?!?
    I have also discovered there is no one to email at Apple to complain. The only suggestions appear to be phoning or the Genius Bar. Obviously, I've been to the Genius Bar to be told that they won't do anything because of this mysterious intruder in their phone.
    Further, I am now in a position where I cannot dispute that the phone has been opened as it now has; by Apple. The staff admitted that before they opened it "it didn't appear to have been opened before" but they stuck by their "we won't do anything" stance upon their discovery.
    I've got a photo of the inside of the phone, and to my untrained eye everything appears immaculate. None of the screw heads, numerous of which would have to be removed and replaced for the "intruder" to get in there, are marked in any way. Can't see why anyone would go to the trouble.
    The phone isn't even under warranty so it's not like I'm trying to get a new for old - I just wanted it fixed by the "experts".
    Rest assured, I will get to the bottom of this.

  • Identify a job history through SID and Serial Number

    Hey,
    I ran the following select statement on the DB
    SELECT
    a.USERNAME, a.serial#, a.sid, a.STATUS, b.sql_text, b.SQL_ID
    FROM V$SESSION a INNER JOIN V$SQLAREA b
    ON a.SQL_ADDRESS= b.ADDRESS;
    and there's one select statement with Serial# and SID 52596,141 that has been running for a while now.
    Any idea how I can tell how long it has been running?
    Oracle Rac 10G running on Linux

    Show all connected users:
    set lines 100 pages 999
    col ID format a15
    select username
    ,      sid || ',' || serial# "ID"
    ,      status
    ,      last_call_et "Last Activity"
    from   v$session
    where  username is not null
    order by status desc
    ,        last_call_et desc
    /
    Display any long operations:
    set lines 100 pages 999
    col username format a15
    col message format a40
    col remaining format 9999
    select   username
    ,        to_char(start_time, 'hh24:mi:ss dd/mm/yy') started
    ,        time_remaining remaining
    ,        message
    from     v$session_longops
    where    time_remaining = 0
    order by time_remaining desc
    /
    Refer: http://www.shutdownabort.com/dbaqueries/Administration_Session.php

  • How to handle a background job from the WebUI when clicking "Approve"

    Hi
    How do I schedule a job through an ABAP report in the back end of CRM when I click the "Approve" button in the WebUI result list?
    As per My requirement I have a Search View and Result View
    In Search View I have  Below Fields
    ITC Vendor ID    
    Claim Status
    User status (date status changed)
    Model
    Serial Number
    Date completed of Service Completion
    Based on the search criteria I will get results in the result view (suppose I got 10 records in the result view).
    In the result view I need to add one button, "Approve".
    When I click the Approve button, a popup window needs to open and display the text below:
    "Approve Claim job has started in background.
    Note: only claims which are in Submitted status will be approved. You may close this window."
    In the SAP CRM system, a background job needs to start when the "Approve" button is clicked in the WebUI.
    In the background, an ABAP report will validate the records from the result list.
    The result list may contain claims of all statuses: "Submitted", "Pending", "Rejected", "Approved".
    I need to collect all records from the result list and validate those whose status is "Submitted":
    1) Sort all the claims based on ITC Vendor ID.
    2) Group all the submitted claims against each ITC Vendor ID from the search result.
    3) Change the status of the selected submitted claims to Approved.
    4) Display information messages whenever a claim is approved; the same message will be captured in the job log:
    'Claims <ClaimID 1>, ... <ClaimID N> now approved for ITC Vendor ID'.
    5) Send an email to each IRC.
    6) Capture all the approved claims in the format below (format attached, "Screen Shot Attachment").
    7) Store the file on the application server (AL11) in .csv format.
    Please find the attachments for reference.
    1)ITC Claim Screen Shot
    2)Screen Shot For Attachment
    Thanks
    Raj

    Hi,
    You can add the following code in the on-approve method to show a popup to the user:
    IF req_edit IS NOT BOUND. " global attribute in impl class of the view
        REFRESH lt_buttons.
        lss_button-id  = 'btnyes'.
        lss_button-text = 'YES'.
        lss_button-on_click = 'YES'.
        APPEND lss_button TO lt_buttons.
        CLEAR lss_button.
        lss_button-id  = 'btnno'.
        lss_button-text = 'NO'.
        lss_button-on_click = 'NO'.
        APPEND lss_button TO lt_buttons.
        CLEAR lss_button.
        CALL METHOD comp_controller->window_manager->create_popup_2_confirm
          EXPORTING
            iv_title          = 'ATTENTION'
            iv_text           = 'Are you sure you want to edit this document?'
            iv_btncombination = '99'
            iv_custombuttons  = lt_buttons
          RECEIVING
            rv_result         = req_edit.
        req_edit->set_on_close_event( iv_event_name = 'EDIT' iv_view = me ). "#EC NOTEXT
        req_edit->open( ).
        RETURN.
      ELSE.
        lr_node ?= req_edit->get_context_node( 'OUTPUTNODE' ).
        lv_outbound = lr_node->get_event_name( ).
    *  CLEAR ptc_pricing_status.
    *    lv_outbound = req_edit->get_fired_outbound_plug( ).
        IF lv_outbound = 'YES'.
    " user confirmed: you can SUBMIT the report and do all the validations here
        ELSE. " No
    " user clicked No: nothing to do
        ENDIF.
        CLEAR req_edit.
      ENDIF.
    Best Regards,
    Dharmakasi.

  • How can you get your iPod touch serial number from Apple, because my iPod was stolen

    How can I get my iPod serial number from Apple?

    http://support.apple.com/kb/HT2526?viewlocale=en_US
    Basic troubleshooting steps  

  • Adobe Store takes too long to deliver download and serial information

    Earlier this week we received some InDesign files for a job, but they were in InDesign 5 format, so we were unable to open them with InDesign CS4. So, taking this as a sign that it was time to upgrade, yesterday, at around 1pm UK time, I purchased the upgrade from Creative Suite Design Premium 4 to Creative Suite Design Premium 5.5. I logged onto the Adobe.com store, looked at the options (Home and Home Business; Small and Medium Business; Education) and selected Small and Medium Business (because that is what we are). I purchased the upgrade (as a download), received a confirmation email from Adobe and waited... and waited... and waited. At the end of the day I called Adobe customer support and asked why I had not received any email with links to the download sites and serial numbers. I was told it can take 24 hrs to process the order.
    Well here we are over 24 hrs later and I have still not had any information from Adobe. I tried calling the customer support and it is only open between 9am and 5pm Monday to Friday. The customer support website is down. The Adobe Volume Licensing site is down - and was all day yesterday as well.
    In the meantime I have no way of opening the files. My deadline is Wednesday and we had planned to work on the job today and tomorrow (which is why I am sitting in my office now). It appears I cannot access customer support before Monday morning, and the customer support website is down until tomorrow evening UK time - at least. This means working all night on Monday and probably Tuesday - assuming we get access to the download information early Monday.
    Adobe is one of the biggest software vendors in the world. When I buy software - online and for download, with a credit card - I expect to be able to access the software immediately. There is no excuse for this delay. None. A business the size of Adobe should not have customer services down for 2 days. Support should run 24/7, 365 days a year. My enquiry yesterday was routed to an offshore call centre, so having a 9 to 5 cut off is simply ridiculous.
    The simple fact is that even the smallest software vendors we buy from deliver the goods within minutes. As you can tell I am mightily "annoyed" by this. At this point in time I am wishing these files were in Quark Xpress format (we have both). Quark used to have terrible support but they are now very good - and available.
    As a customer of Adobe for over 15 years I am disappointed. Today, in my email inbox I have emails from Adobe about new online cloud services. Well, why would I want to invest any more in Adobe software if I cannot even access the ones I have bought - quickly?
    If companies like Adobe want to know why piracy is rife - look at your own delivery systems. The only option I have now to do this job is to either download a pirated version (which I will not) or to download and run the trial for Indesign.  How stupid is that?
    Apologies for the rant.

    ProDesignTools wrote:
    Kevin Quigley wrote:
    ... selected Small and Medium Business (because that is what we are). I purchased the upgrade (as a download), ... 
    Well here we are over 24 hrs later and I have still not had any information from Adobe. ...
    In the meantime I have no way of opening the files. My deadline is Wednesday ...
    Hi Kevin, not sure why it's taken longer than usual, but you can just download the free trial in the meantime.  It will work 100% for 30 days, and you can convert it over to the full version once you receive your permanent serial number without reinstalling.  This is fine even with a business license.
    There's really no reason not to proceed in this fashion, especially with an imminent deadline.  Adobe recommends this in cases where there is any sort of delay (for example, student validation) or immediate need.  And absolutely you should avoid downloading the software from anywhere else, which is not only illegal but also hazardous to the health of your computer (malware).
    Hope this helps!
    Acrobat has never had a Trial for Mac. Probably never will.

  • I am trying to re-install CS5; when going to use Photoshop, I have my original serial number and it will not let me activate the software on a new hard drive.

    Why is Adobe making us jump through so many hoops just to re-install software?
    I had a hardware crash and had to re-install all my programs. We can not activate Photoshop CS online or by phone! What gives?
    I am in the middle of a project and I can not use Photoshop CS.
    Please advise.

    So let me get this right... I pay for Creative Suite 5. I have my original discs and serial number. I still have the entire box it came in. I simply need to re-install the product. I can not use Photoshop because it can not be activated via phone or online. I am supposed to just sit in a chat page, put in a queue with "We are still assisting other customers, thank you for your patience. You can also try our community forums, where experts are online 24/7."
    IS ADOBE FOR REAL??? THEY EXPECT YOU TO SIT IN A QUEUE ON HOLD, WITHOUT ANY INDICATION OF HOW LONG YOU WILL BE IN THE QUEUE, IN THE MIDDLE OF THE WORK DAY.
    NOW YOU KNOW WHY PEOPLE HACK THEIR SOFTWARE!
    I HAVE A JOB TO DO AND CAN NOT TELL A CLIENT, "MY PHOTOSHOP DOES NOT WORK, I AM WAITING IN ADOBE'S HELP QUEUE!"

  • How to acquire serial data on a digital input line with good performance?

    Hello,
    we have a performance problem with our real-time controller. Our objective is to read a 24-bit digital waveform from a digital input line. To do this, we supply a clock signal (236 kHz) to the PFI1 line of our DAQ board. On each rising edge of the clock, a new bit is set on the digital input DI0.
    Our hardware, which transmits the data, is triggered through a digital output from our real-time controller. On each edge on this output, the hardware starts a serial transmission of 24 bits.
    Everything works fine except for the bad performance of our real-time controller. We want to acquire the 24 bits in a 1 ms timed loop. To measure performance we wrote a test program in which we only triggered the hardware and transferred the data to the real-time controller. The task doing this job has a CPU load of approx. 30%, which is, in my humble opinion, very high. The task is not waiting for data or anything else! We have earlier implemented a control which also uses a 1 ms timed loop. This control samples 2 analog input signals and 2 counters. Furthermore it sends telegrams over CAN and does many calculations. The strange thing is, this much bigger program has a CPU load of only 25%. Does anybody know where the problem is?
    For a better understanding I attached our test program.
    We're using:
    PXI-8175 real-time controller
    PXI-6221 multifunction DAQ
    Thanks!
    Regards, 
       Crest!
    Attachments:
    dig_test.zip ‏51 KB

    Hello,
    First of all, 30% CPU load is normal, because the DAQmx driver needs a lot of resources.
    In your program you should place a wait (for example 1 ms) into the while loop, which lowers the CPU load.
    If this is not enough, you should build your VI like the following example.
    Regards,
    Christian
    Attachments:
    Read Dig Port.vi ‏51 KB

  • External Operation Job Log

    Hello all,
    I can't seem to find additional information on this. I have written an external operation in C# and I can run it from the portal without any issue. I know that standard error is supposed to be captured in the job log; however, I can't figure out how to write to standard error from a C# console application. Console.Error doesn't seem to cut it.
    I am using ALUI 6.0 SP1, but I don't think this has changed much over the years. Has anybody written a C# external operation and been able to write to the job log?
    Thanks,
    Berney

    You could just make it write to stdout and then redirect stdout to stderr, like this: "dir 1>&2". I put that in a bat file in the scripts directory and then made the bat an external operation. It wrote to the job logs like this:
    Mar 3, 2009 11:40:55 AM- Starting to run operations (1 total) for job 'stdout dir test ext op Job'. Will stop on errors.
    Mar 3, 2009 11:40:55 AM- *** Job Operation #1 of 1: ExternalOperation [Run as owner 'Administrator']
    Mar 3, 2009 11:40:55 AM- Java Version: 1.5.0_12 from BEA Systems, Inc.
    Mar 3, 2009 11:40:55 AM- OS x86 Windows 2003 5.2 as SYSTEM
    Mar 3, 2009 11:40:55 AM- External Operation Agent for stdout dir test External Operation is starting...
    Mar 3, 2009 11:40:55 AM- Original Script is: "test.bat"
    Mar 3, 2009 11:40:55 AM- Scripts home is: E:\bea\alui\ptportal\10.3.0\scripts
    Mar 3, 2009 11:40:55 AM- Appending E:\bea\alui\ptportal\10.3.0\scripts to test.bat.
    Mar 3, 2009 11:40:55 AM- Full Path to script is: E:\bea\alui\ptportal\10.3.0\scripts\test.bat.
    Mar 3, 2009 11:40:55 AM- Operation has timeout of 0 ms.
    Mar 3, 2009 11:40:55 AM- stdout>
    Mar 3, 2009 11:40:55 AM- stdout>E:\bea\alui\ptportal\10.3.0\scripts>dir 1>&2
    Mar 3, 2009 11:40:55 AM- stderr> Volume in drive E is New Volume
    Mar 3, 2009 11:40:55 AM- stderr> Volume Serial Number is 4CFC-EB07
    Mar 3, 2009 11:40:55 AM- stderr>
    Mar 3, 2009 11:40:55 AM- stderr> Directory of E:\bea\alui\ptportal\10.3.0\scripts
    Mar 3, 2009 11:40:55 AM- stderr>
    Mar 3, 2009 11:40:55 AM- stderr>03/03/2009 11:37 AM <DIR> .
    Mar 3, 2009 11:40:55 AM- stderr>03/03/2009 11:37 AM <DIR> ..
    Mar 3, 2009 11:40:55 AM- stderr>12/18/2008 07:59 AM 5,049 AnalyticsRunJobs.bat
    Mar 3, 2009 11:40:55 AM- stderr>10/08/2008 01:35 PM 2,034 BulkSubscriber.bat
    Mar 3, 2009 11:40:55 AM- stderr>10/08/2008 01:35 PM 3,577 SavedSearchMailer.bat
    Mar 3, 2009 11:40:55 AM- Total Memory = 33554432 bytes, Free Memory = 9573312 bytes, Used Memory = 23981120 bytes.
    Mar 3, 2009 11:40:55 AM- stderr>03/03/2009 11:37 AM 8 test.bat
    Mar 3, 2009 11:40:55 AM- stderr> 4 File(s) 10,668 bytes
    Mar 3, 2009 11:40:55 AM- stderr> 2 Dir(s) 17,469,964,288 bytes free
    Mar 3, 2009 11:40:55 AM- *** Job Operation #1 completed: Process completed successfully; Command line: ""test.bat"".(282609)
    Mar 3, 2009 11:40:55 AM- Done with job operations.
    I don't think much has changed with this in the portal 6.x world, but it looks like stderr is definitely captured in wci 10.3. If there's a problem with an older version, start a support ticket to investigate a possible bug.
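    The redirection trick can be verified outside the portal as well. A minimal sketch (the echoed text and the use of `sh -c` are illustrative, not part of the portal API): everything the script prints on stdout is pushed to stderr with `1>&2`, which is the stream the job log captures as `stderr>` lines.

    ```python
    import subprocess

    # Stand-in for the external-operation script: "1>&2" redirects stdout
    # to stderr, mimicking the "dir 1>&2" line in test.bat above.
    script = 'echo "Volume in drive E is New Volume" 1>&2'

    result = subprocess.run(
        ["sh", "-c", script],
        capture_output=True,
        text=True,
    )

    # stdout is empty; the output arrived on stderr, which is what the
    # job log records.
    print("stdout:", repr(result.stdout))
    print("stderr:", repr(result.stderr))
    ```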

  • In LO Cockpit, Job contol job is cancelled in SM37

    Dear All
       I am facing one problem; please help me resolve it.
    Whenever I schedule the delta job for application component 03, it gets cancelled in SM37. As a result, I cannot move the data from the queued delta MCEX03 (SMQ1 or LBWQ) into RSA7, because the job is cancelled. In the job log I found the runtime error below. Please help me resolve this issue.
    Runtime Errors         MESSAGE_TYPE_X
    Date and Time          04.10.2007 23:46:22
    Short text
    The current application triggered a termination with a short dump.
    What happened?
    The current application program detected a situation which really
    should not occur. Therefore, a termination with a short dump was
    triggered on purpose by the key word MESSAGE (type X).
    What can you do?
    Note down which actions and inputs caused the error.
    To process the problem further, contact your SAP system
    administrator.
    Using Transaction ST22 for ABAP Dump Analysis, you can look
    at and manage termination messages, and you can also
    keep them for a long time.
    Error analysis
    Short text of error message:
    Structures have changed (sy-subrc=2)
    Long text of error message:
    Technical information about the message:
    Message class....... "MCEX"
    Number.............. 194
    Variable 1.......... 2
    Variable 2.......... " "
    Variable 3.......... " "
    Variable 4.......... " "
    How to correct the error
    Probably the only way to eliminate the error is to correct the program.
    If the error occurs in a non-modified SAP program, you may be able to
    find an interim solution in an SAP Note.
    If you have access to SAP Notes, carry out a search with the following
    keywords:
    "MESSAGE_TYPE_X" " "
    "SAPLMCEX" or "LMCEXU02"
    "MCEX_UPDATE_03"
    If you cannot solve the problem yourself and want to send an error
    notification to SAP, include the following information:
    1. The description of the current problem (short dump)
    To save the description, choose "System->List->Save->Local File
    (Unconverted)".
    2. Corresponding system log
    Display the system log by calling transaction SM21.
    Restrict the time interval to 10 minutes before and five minutes
    after the short dump. Then choose "System->List->Save->Local File
    (Unconverted)".
    3. If the problem occurs in a problem of your own or a modified SAP
    program: The source code of the program
    In the editor, choose "Utilities->More
    Utilities->Upload/Download->Download".
    4. Details about the conditions under which the error occurred or which
    actions and input led to the error.
    Thanks in advance
    Raja

    LO EXTRACTION:
    First Activate the Data Source from the Business Content using “LBWE”
    For Customizing the Extract Structure – “LBWE”
    Maintaining the Extract Structure
    Generating the Data Source
    Once the Data Source is generated do necessary setting for
    Selection
    Hide
    Inversion
    Field Only Known in Exit
    And then save the Data Source
    Activate the Data Source
    Using “RSA6” transport the Data Source
    Replicate the Data Source in SAP BW and Assign it to Info source and Activate
    Run the statistical setup to fill the data into the setup tables
    Go to “SBIW” and follow the path
    We can cross check using “RSA3”
    Go Back to SAP BW and Create the Info package and run the Initial Load
    Once the “Initial delta” is successful before running “delta” load we need to set up “V3 Job” in SAP R/3 using “LBWE”.
    Once the Delta is activated in SAP R/3 we can start running “Delta” loads in SAP BW.
    Direct Delta: In case of Direct Delta, LUWs are posted directly to the delta queue (RSA7), and we extract the LUWs from the delta queue to SAP BW by running delta loads. Direct Delta degrades OLTP system performance because, when LUWs are posted directly to the delta queue (RSA7), the application is kept waiting until all the enhancement code has executed.
    Queued Delta: In case of Queued Delta, LUWs are posted to the extractor queue (LBWQ); by scheduling the V3 job we move the documents from the extractor queue (LBWQ) to the delta queue (RSA7), and we extract the LUWs from the delta queue to SAP BW by running delta loads. Queued Delta is recommended by SAP because it maintains an extractor log, which helps us handle LUWs that are missed.
    V3 -> Asynchronous Background Update Method – as the name suggests, this is an asynchronous update method that runs as a background job.
    Update Methods,
    a.1: (Serialized) V3 Update
    b. Direct Delta
    c. Queued Delta
    d. Un-serialized V3 Update
    Note: Before PI Release 2002.1 the only update method available was V3 Update. As of PI 2002.1 three new update methods are available because the V3 update could lead to inconsistencies under certain circumstances. As of PI 2003.1 the old V3 update will not be supported anymore.
    a. Update methods: (serialized) V3
    • Transaction data is collected in the R/3 update tables
    • Data in the update tables is transferred through a periodic update process to BW Delta queue
    • Delta loads from BW retrieve the data from this BW Delta queue
    Transaction postings lead to:
    1. Records in transaction tables and in update tables
    2. A periodically scheduled job transfers these postings into the BW delta queue
    3. This BW Delta queue is read when a delta load is executed.
    Issues:
    • Even though it is called serialized, the correct sequence of extraction data cannot be guaranteed
    • V2 update errors can lead to V3 updates never being processed
    Update methods: direct delta
    • Each document posting is directly transferred into the BW delta queue
    • Each document posting with delta extraction leads to exactly one LUW in the respective BW delta queues
    Transaction postings lead to:
    1. Records in transaction tables and directly in the BW delta queue
    2. This BW delta queue is read when a delta load is executed.
    Pros:
    • Extraction is independent of V2 update
    • Less monitoring overhead of update data or extraction queue
    Cons:
    • Not suitable for environments with high number of document changes
    • Setup and delta initialization have to be executed successfully before document postings are resumed
    • V1 is more heavily burdened
    Update methods: queued delta
    • Extraction data is collected for the affected application in an extraction queue
    • Collective run as usual for transferring data into the BW delta queue
    Transaction postings lead to:
    1. Records in transaction tables and in extraction queue
    2. A periodically scheduled job transfers these postings into the BW delta queue
    3. This BW Delta queue is read when a delta load is executed.
    Pros:
    • Extraction is independent of V2 update
    • Suitable for environments with high number of document changes
    • Writing to extraction queue is within V1-update: this ensures correct serialization
    • Downtime is reduced to running the setup
    Cons:
    • V1 is more heavily burdened compared to V3
    • Administrative overhead of extraction queue
    Update methods: Un-serialized V3
    • Extraction data is written, as before, into the update tables with a V3 update module
    • V3 collective run transfers the data to BW Delta queue
    • In contrast to serialized V3, the updating collective run reads the data from the update tables without regard to sequence
    Transaction postings lead to:
    1. Records in transaction tables and in update tables
    2. A periodically scheduled job transfers these postings into the BW delta queue
    3. This BW Delta queue is read when a delta load is executed.
    Issues:
    • Only suitable for data target design for which correct sequence of changes is not important e.g. Material Movements
    • V2 update has to be successful
    Direct Delta: With this update mode, the extraction data is transferred with each document posting directly into the BW delta queue. In doing so, each document posting with delta extraction is posted for exactly one LUW in the respective BW delta queues.
    Queued Delta: With this update mode, the extraction data for the affected application is collected in an extraction queue (instead of in the update tables), and can be transferred as usual with the V3 update by means of an updating collective run into the BW delta queue. In doing so, up to 10,000 delta extractions of documents for an LUW are compressed per DataSource into the BW delta queue, depending on the application.
    Non-serialized V3 Update: With this update mode, the extraction data for the application considered is written as before into the update tables with the help of a V3 update module. They are kept there as long as the data is selected through an updating collective run and are processed. However, in contrast to the current default settings (serialized V3 update), the data in the updating collective run are thereby read without regard to sequence from the update tables and are transferred to the BW delta queue.
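    The difference between direct and queued delta can be illustrated with a toy model (pure illustration; the queue and function names are invented and only loosely mirror LBWQ/RSA7):

    ```python
    from collections import deque

    # Toy model: "delta_queue" stands in for the BW delta queue (RSA7),
    # "extraction_queue" for the extractor queue (LBWQ).
    delta_queue = deque()
    extraction_queue = deque()

    def post_document_direct(doc):
        # Direct delta: the posting writes straight into the delta queue,
        # one LUW per document -- no collective run needed.
        delta_queue.append(doc)

    def post_document_queued(doc):
        # Queued delta: the V1 posting only appends to the extraction
        # queue, which keeps the document order (serialization) intact.
        extraction_queue.append(doc)

    def collective_run():
        # The periodically scheduled collective (V3) job drains the
        # extraction queue into the delta queue, preserving the sequence.
        while extraction_queue:
            delta_queue.append(extraction_queue.popleft())

    def delta_load():
        # A BW delta load reads and empties the delta queue.
        loaded = list(delta_queue)
        delta_queue.clear()
        return loaded

    for doc in ["PO-1", "PO-2"]:
        post_document_queued(doc)
    collective_run()
    print(delta_load())  # posting order is preserved: ['PO-1', 'PO-2']
    ```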
    V1 - Synchronous update
    V2 - Asynchronous update
    V3 - Batch asynchronous update
    These are different work processes on the application server that takes the update LUW (which may have various DB manipulation SQLs) from the running program and execute it. These are separated to optimize transaction processing capabilities.
    Taking an example -
    If you create/change a purchase order (me21n/me22n), when you press 'SAVE' and see a success message (PO.... changed...), the update to underlying tables EKKO/EKPO has happened (before you saw the message). This update was executed in the V1 work process.
    There are some statistics-collecting tables in the system which can capture data for reporting. For example, LIS table S012 stores purchasing data (the same data as EKKO/EKPO, stored redundantly in a different structure to optimize reporting). These tables are updated with the transaction you just posted in a V2 process. Depending on system load, this may happen a few seconds later (after you saw the success message). You can see the pending update requests in SM13.
    V3 is specifically for BW extraction. The update LUW for these is sent to V3 but is not executed immediately. You have to schedule a job (e.g. in LBWE definitions) to process these. This is again to optimize performance.
    V2 and V3 are separated from V1 as these are not as real time critical (updating statistical data). If all these updates were put together in one LUW, system performance (concurrency, locking etc) would be impacted.
    Serialized V3 update is called after V2 has happened (this is how the code running these updates is written) so if you have both V2 and V3 updates from a txn, if V2 fails or is waiting, V3 will not happen yet.
    Hope this helps.

  • DTP Background job in sm37?

    Hi Gurus,
    When I check the DTP status from the DTP monitoring screen via Job Overview, the screen below appears (job name, then Status, then further tabs for timings etc.):
    BI_Process_dtp_load                         Finished 
    BIDTPR_123456_1                             Active
    BIDTPR_123456_1                             Active
    BIDTPR_123456_1                             Active
    BIDTPR_123456_1                             Active
    BIDTPR_123456_1                            Finished
    What are these middle jobs for, and why does the last BIDTPR_123456_1 always show status Finished?
    And what does RELEASE JOB mean on the same screen?
    Thanks,
    SDPSDN

    Hi SDNSAP
    For example, if you trigger a DTP with parallel processing set to 10 and "serial extraction, immediate parallel processing" as the processing mode, you can see a maximum of 9 data packets running in parallel (one process is reserved for the main job).
    Each of these 9 data packets runs under its own job named BIDTPR_123456_1; as soon as a packet finishes, its respective job status changes to Finished.
    Thanks,
    Prashanth
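    The packet/job relationship described above can be sketched as a simplified model (not the actual BI scheduler; the job names and packet count are illustrative):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    # With parallel processing = 10, one process is the main job
    # (BI_PROCESS_DTP_LOAD) and up to 9 worker jobs (the repeated
    # BIDTPR_* entries in SM37) each handle one data packet.
    PARALLEL_PROCESSES = 10
    MAX_PACKET_JOBS = PARALLEL_PROCESSES - 1  # 9

    def process_packet(packet_id):
        # Stand-in for loading one data packet.
        return f"BIDTPR_packet_{packet_id}: Finished"

    packets = range(1, 19)  # 18 packets, processed at most 9 at a time
    with ThreadPoolExecutor(max_workers=MAX_PACKET_JOBS) as pool:
        statuses = list(pool.map(process_packet, packets))

    # Every packet job ends in status "Finished", which is why the last
    # BIDTPR_* entry always shows Finished once the load is complete.
    print(all(s.endswith("Finished") for s in statuses))  # True
    ```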
