niRFSG_CheckGenerationStatus fails periodically

Hello:
I keep observing a strange effect once in a while on my three-channel PXI-5673 system. Just to make it clear: I have yet to find a way to reproduce this problem on demand.
NI-RFSG 1.6.3.
There are three generators, daisy-chained. I set them to generate the same file continuously for 1000 repetitions. Then, after some delay, I check the completion status with niRFSG_CheckGenerationStatus on all channels. In the normal case I get 1,1,1, i.e. all generators finished generating. Sometimes, however, the instrument returns 1,0,1, i.e. the second generator reports that it didn't finish even though the other two did. Once that happens, the generation-done flag returned by niRFSG_CheckGenerationStatus never changes. Also, niRFSG_CheckGenerationStatus never returns an error.
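In case it helps, the flow is essentially the following sketch (the resource names are placeholders and the waveform setup is elided; this is not my actual code):

#include <stdio.h>
#include "niRFSG.h"

/* Placeholder resource names for the three daisy-chained generators. */
static const ViRsrc kResources[3] = { "PXI1Slot2", "PXI1Slot5", "PXI1Slot8" };

int main(void)
{
    ViSession vi[3];
    ViBoolean isDone[3];
    int i;

    for (i = 0; i < 3; i++) {
        /* Open and configure each generator (error handling elided). */
        if (niRFSG_init(kResources[i], VI_TRUE, VI_FALSE, &vi[i]) < VI_SUCCESS)
            return 1;
        /* ... load the waveform, set 1000 repetitions, etc. ... */
        niRFSG_Initiate(vi[i]);
    }

    /* After a fixed delay, check completion once per channel. */
    for (i = 0; i < 3; i++) {
        niRFSG_CheckGenerationStatus(vi[i], &isDone[i]);
        printf("generator %d done = %d\n", i, (int)isDone[i]);
    }

    for (i = 0; i < 3; i++)
        niRFSG_close(vi[i]);
    return 0;
}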
Any suggestions on what might be causing this error?
Thank you,
-Ilya.
Attachments:
Capture_RFGEN_2chan_complete_tmo.spy (946 KB)

Hi Dan:
It happens perhaps once in every 50 runs. Sometimes several times in a row.
In my code I have a loop that repeatedly generates the same waveforms on all three generator boards at different power levels. The delay before the code starts checking the completion status is the same on every iteration.
As a result, I have reason to believe that the actual generation is done and that the driver simply doesn't report the completion status for all cards. The completion status is '1' for two of the three instruments and '0' for just one of them.
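To rule out a too-short delay, the check can be bounded instead of done once; a minimal sketch (the timeout value and the tight loop without a sleep are arbitrary choices, not my production code):

#include <time.h>
#include "niRFSG.h"

/* Poll niRFSG_CheckGenerationStatus until the session reports done or
 * timeoutSeconds elapses. Returns VI_TRUE only if generation completed. */
static ViBoolean waitForGenerationDone(ViSession vi, double timeoutSeconds)
{
    ViBoolean isDone = VI_FALSE;
    time_t start = time(NULL);

    while (!isDone && difftime(time(NULL), start) < timeoutSeconds) {
        if (niRFSG_CheckGenerationStatus(vi, &isDone) < VI_SUCCESS)
            break; /* driver error: stop polling and report not done */
    }
    return isDone;
}

Even with a very generous timeout, the stuck channel keeps returning 0.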
Let me know if you need more information.
-Ilya.

Similar Messages

  • About 2 scream!!! NOT SWITCHABLE GRAPHICS PROBLEM!!! Vid card seems 2 FAIL PERIOD!!!

    I have the dv7-6135dx with the Intel/Radeon 6490, blah blah blah. You guys know the machine. Every time I try to find information regarding the bleeding ATI card, all I can find is switchable graphics this, switchable that, BIOS update for switcha....
    When I disabled the dynamic switching in the BIOS long ago, I lost the ability to control screen brightness. It was very dim. I haven't tried it again.
    When I plug a monitor into the HDMI port: nothing. The VGA port works fine, but I cannot watch any video larger than 1.2 GB without super skipping and digital breakup of the video (the million tiny boxes thing).
    From the ATI site, the auto-detect thingy found my card and gave me the best driver; it installed, but when I open Catalyst it always says it cannot start because there is no ATI device.
    Before? I could watch 18 GB HD vids with stunning performance. Now? It's DVD rips or choke and die.
    Can SOMEBODY.... ANYBODY tell me what, if any, troubleshooting there is to see if the card is shot, or if there is a fix, please? One that does NOT involve some gahrbajje about bleeding switchable bloody graphics?
    Thanks in advance for any and all help in THIS matter.

    Hey,
    I don't know if this works for you, but for my TimelineX laptop it does:
    https://wiki.archlinux.org/index.php/Ac … e_Graphics
    I don't know if there is an "official" version, so I can only provide the link above. This script stops my Radeon card when I boot (see the sketch below for the mechanism such scripts typically use). The only kernel parameter I have is i915.modeset=1,
    and my Intel card is always on.
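    For reference, a minimal sketch of the usual mechanism behind such scripts, the kernel's vgaswitcheroo interface (an assumption on my part; the wiki script linked above may differ in detail):
    #!/bin/sh
    # Assumes the kernel exposes vgaswitcheroo (hybrid-graphics support).
    # debugfs must be mounted before the switch file exists.
    mount -t debugfs none /sys/kernel/debug 2>/dev/null
    # Power off the discrete (Radeon) card; the integrated Intel GPU stays on.
    echo OFF > /sys/kernel/debug/vgaswitcheroo/switch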

  • Magic Mouse continually fails to left click, and keyboard problems

    Since my Yosemite 10.10.1 update, my mouse and keyboard keep failing periodically. After a night's sleep, when I try to open a program by left-clicking, it will occasionally only right-click. Being able to left-click is crucial, even more than right-clicking, especially when trying to enter a website into a URL bar or when selecting a bookmark. Anyway, I will restart my computer, and when I then try to enter my password, certain keys will fail and respond with a low ping. It usually requires at least 5 to 10 minutes of a complete shutdown until the keyboard and mouse work again. I keep fearing that my keyboard will fail while entering this message. It occurs very randomly, but has been occurring more often as time goes by. I worried that it might occur before I'm done typing this post. Does anyone have any idea what the problem could be, and have any remedy recommendations?

    Nubz,
    actually I'm still using a standard wired Apple keyboard with the Magic Mouse, because the wire goes under my desk, and I was actually considering getting a trackpad. Still, since my Yosemite upgrade I'm having problem after problem, so I'm not sure I will even stay with Mac. Honestly, I've examined any and all solutions in that support article, but none of them were relevant to my problem.
    I did check and repair my Yosemite disk permissions using Disk Utility, and although it found only a few things wrong, and none were related to my keyboard or mouse (most were for my HP printer), the repairs seemed to rectify the problem.

  • XP Mode loses network connectivity periodically on multiple Dell OptiPlex Windows 7 Professional PCs

    XP Mode loses connectivity periodically throughout the day, on a daily basis, but the Windows 7 Pro host PC never loses connectivity. The problem is at its worst when XP Mode has been running all day. If XP Mode is rebooted once daily, the problem is reduced but not gone. Some days are better than others. One day it will happen to one XP Mode and not another; another day one XP Mode will fail repeatedly while another works OK. There is no rhythm to the failures. The problem began with the 1st XP Mode environment.
    I can disable and re-enable the virtual NIC and regain connectivity (a scripted version of that workaround is sketched below).
    The problem occurs during batch processing of DOS-based payroll software using many open files and handles. The software runs from XP Mode as a shortcut to a mapped server drive letter.
    The environment is small:
    1 Windows 2008 R2 server as domain/file server, a Dell T420.
    3 Windows 7 Pro PCs with XP Mode. All XP Modes fail periodically. All Dell PCs: one OptiPlex 3010, one OptiPlex 390, one Dell Dimension.
    2 Windows XP Pro PCs, OptiPlex 330. Real mode never loses connectivity.
    All drivers up to date.
    Router firmware up to date.
    Power management disabled.
    I disabled the server's Auto Disconnect.
    XP Mode network settings use the physical adapter, not NAT.
    I've been reading forums and troubleshooting this for months. I've tried everything except recreating the VHD.
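    Since I end up bouncing the NIC by hand anyway, here is a minimal sketch of scripting it inside XP Mode (the adapter name is an assumption; check "netsh interface show interface" for the real one):
    @echo off
    rem Bounce the virtual NIC inside XP Mode to regain connectivity.
    rem "Local Area Connection" is a placeholder adapter name.
    netsh interface set interface name="Local Area Connection" admin=DISABLED
    rem crude ~5 second pause before re-enabling
    ping -n 5 127.0.0.1 >nul
    netsh interface set interface name="Local Area Connection" admin=ENABLED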

    #1 Do you have any supporting articles you can point me to?
    #2 What do you mean by "temperature"? Physical heat? The offices are kept at approx. 75°.
    #3 Will look at them again. As I said, I've been troubleshooting this for quite a while. I don't recall any Event Viewer errors related to this, but I will check again.
    Thanks again for your input.
    DouglasOfSanMarcos

  • [solved] migrating to systemd

    Hey All,
    I have hit two stumbling blocks migrating to systemd.
    One is rc.local support.
    rc.local simply contains
    echo 0 > /sys/class/backlight/intel_backlight/brightness
      which needs to be run as root once per startup to enable backlight adjustment
    I wrote the file /etc/systemd/system/rc-local.service
    [Unit]
    Description=/etc/rc.local Compatibility
    [Service]
    Type=oneshot
    ExecStart=/etc/rc.local
    TimeoutSec=0
    #StandardInput=tty
    #RemainAfterExit=yes
    [Install]
    WantedBy=multi-user.target
    and executed it with
    # systemctl start rc-local.service
    and this works correctly.
    Running
    systemctl enable rc-local.service
    and restarting, however, still results in rc.local failing.
    The second issue is that slim.service fails periodically (about 50% of the time) and I am not sure why; /var/lock exists.
    Does anyone have any ideas as to why slim fails, or a workaround: either something to replace rc.local completely, or a fix for my rc-local.service?
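    In case it helps the discussion, here is the variant I would try next (assumptions on my part: RemainAfterExit=yes keeps the oneshot unit "active" after it runs, and the boot-time failure may just be the echo running before the intel_backlight sysfs node exists, so ordering after module load might help):
    [Unit]
    Description=/etc/rc.local Compatibility
    # assumption: wait until kernel modules (i915) have been loaded
    After=systemd-modules-load.service
    [Service]
    Type=oneshot
    ExecStart=/etc/rc.local
    RemainAfterExit=yes
    [Install]
    WantedBy=multi-user.target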
    Thanks
    Last edited by RGoose (2012-09-01 14:53:40)

    $ sudo journalctl -b | grep tmpfile
    Aug 31 12:33:50 Tycho systemd-tmpfiles[1886]: stat(/run/user/1000/gvfs) fail...d
    This is the only reference to tmpfiles that I can find.
    Grepping for "backlight", "systemd", "set", and "tmp" turns up nothing for tmpfiles aside from the line posted above.
    Here is everything for systemd:
    $ sudo journalctl -b | grep systemd
    Aug 31 12:18:51 Tycho kernel: Command line: root=/dev/sda3 ro init=/bin/systemd
    Aug 31 12:18:51 Tycho systemd-udevd[49]: starting version 188
    Aug 31 12:18:51 Tycho systemd[1]: systemd 189 running in system mode. (+PAM...h)
    Aug 31 12:18:51 Tycho systemd[1]: Inserted module 'autofs4'
    Aug 31 12:18:51 Tycho systemd[1]: Set hostname to <Tycho>.
    Aug 31 12:18:51 Tycho systemd-udevd[130]: starting version 189
    Aug 31 12:18:51 Tycho systemd-journal[133]: Journal started
    Aug 31 12:18:51 Tycho arch-modules-load[124]: /usr/lib/systemd/arch-modules-...y
    Aug 31 12:18:51 Tycho systemd-fsck[269]: /dev/sda1: clean, 29/26104 files, 2...s
    Aug 31 12:18:51 Tycho systemd-udevd[140]: renamed network interface wlan0 to...0
    Aug 31 12:18:51 Tycho dbus[321]: [system] Activating systemd to hand-off: s...e'
    Aug 31 12:18:51 Tycho systemd-logind[316]: Watching system buttons on /dev/i...)
    Aug 31 12:18:51 Tycho systemd-logind[316]: Watching system buttons on /dev/i...)
    Aug 31 12:18:51 Tycho systemd-logind[316]: Watching system buttons on /dev/i...)
    Aug 31 12:18:51 Tycho systemd-logind[316]: Watching system buttons on /dev/i...)
    Aug 31 12:18:51 Tycho systemd-logind[316]: New seat seat0.
    Aug 31 12:18:51 Tycho systemd[1]: rc-local.service: main process exited, co...=1
    Aug 31 12:18:51 Tycho systemd-logind[316]: Watching system buttons on /dev/i...)
    Aug 31 12:18:51 Tycho systemd[1]: Unit rc-local.service entered failed state.
    Aug 31 12:18:52 Tycho dbus[321]: [system] Activating via systemd: service n...e'
    Aug 31 12:18:52 Tycho systemd-logind[316]: Watching system buttons on /dev/i...)
    Aug 31 12:18:52 Tycho systemd[1]: Startup finished in 874ms 899us (kernel) ...s.
    Aug 31 12:18:53 Tycho dbus[321]: [system] Activating via systemd: service n...e'
    Aug 31 12:18:53 Tycho dbus[321]: [system] Activation via systemd failed for...s.
    Aug 31 12:18:54 Tycho dbus[321]: [system] Activating via systemd: service n...e'
    Aug 31 12:18:54 Tycho dbus[321]: [system] Activation via systemd failed for...s.
    Aug 31 12:18:55 Tycho dbus[321]: [system] Activating via systemd: service n...e'
    Aug 31 12:18:55 Tycho dbus[321]: [system] Activation via systemd failed for...s.
    Aug 31 12:18:56 Tycho dbus[321]: [system] Activating via systemd: service n...e'
    Aug 31 12:18:56 Tycho dbus[321]: [system] Activation via systemd failed for...s.
    Aug 31 12:18:57 Tycho systemd-logind[316]: New session 1 of user jobbins.
    Aug 31 12:18:57 Tycho systemd-logind[316]: Linked /tmp/.X11-unix/X0 to /run/....
    Aug 31 12:18:57 Tycho dbus[321]: [system] Activating via systemd: service n...e'
    Aug 31 12:18:57 Tycho dbus[321]: [system] Activating via systemd: service n...e'
    Aug 31 12:18:58 Tycho dbus[321]: [system] Activating via systemd: service n...e'
    Aug 31 12:33:50 Tycho systemd-tmpfiles[1886]: stat(/run/user/1000/gvfs) fail...d

  • Using a custom certificate store for SCCM 2012 clients and primary site server

    I have read what seems to be all the PKI-related documentation out there for SCCM 2012. I have a PKI infrastructure up and running, issuing certificates with an offline root through Group Policy autoenrollment. The problem I'm faced with is that we are migrating from SCCM 2007, which was in native mode, and we chose not to use the CA that we used for the old SCCM environment. When the clients attempt to communicate with the MP, it runs through all of the different certificates, which adds a tremendous amount of overhead to the MP. We will have tens of thousands of clients by migration's end. Could someone please point me to a document that goes over how to leverage a custom certificate store that I could then tell the new 2012 environment to use? I know that it's in there; I've seen it in the console. The setup is one primary site server with SQL on-box and the PKI I just mentioned, as well as the old 2007 environment, which is still live.
    I read that you can try to use the SAN as a method of identifying the new certs, but I haven't found a good document covering exactly how that works. Any info you could provide I would be very grateful for. Thanks.

    Jason, thank you for your reply. I'm getting the impression that you have never been in the situation where you had to deal with two different PKI environments. Let me state that I understand what you're saying about trust. We have to configure the trusted root CA via GPO. That simply isn't enough, and I have a valid example to back up this claim. When the new clients got the advertisement and began the ccmsetup process, I used the /pki switch among others. What the client ended up doing was selecting the certificate with the longest validity period, which was issued by our old CA. It checked the authentication chain, found it to be valid, and selected it for communication. At that point the installation failed, period, no caveats as you say. The reason the install failed is that the new PKI infrastructure is integrated into the new environment and the old one is not. So when you said "that are trusted and they can use *any* cert that is trusted because at the end of the day, there is no difference between two valid certs that have the same purpose as long as they are trusted," that is not correct. Both certs are trusted and use the same certificate template, but only one certificate would allow the install to complete successfully.
    Once I started using the CCMCERTISSUERS switch, the client install went swimmingly. The only reason I'm still debating this point is that someone might read this thread, see your comments, and assume "well, I've got my new PKI configured as a trusted root CA, I should be all set," and their deployment will fail, just as my pilot did.
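    For anyone who lands here with the same problem: the property goes on the ccmsetup command line. A sketch only; the issuer DN shown is a placeholder and must match your new issuing CA's subject exactly:
    ccmsetup.exe /UsePKICert CCMCERTISSUERS="CN=New Issuing CA; DC=contoso; DC=com"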
    About Intune: I'm looking forward to doing a POC in the lab I built with my Note 3. I'm hoping it goes well, as I really want to have our MDM migrated into ConfigMgr... I think the biggest obstacle, outside of selling it to management, will be the actual device migration from the current MDM solution. From what I understand of the enrollment process, manual install and config is the only path forward.
    Thanks Jason for your post and discussion.

  • FEP 2010

    Hi,
    I have a few queries on FEP 2010:
    1. The Conficker virus frequently reappears on the FEP clients even after removal. How do I get rid of it completely?
    2. I am unable to specify an override on the policy, as it says "the specified threat could not be found in the definitions. Verify that Forefront Endpoint Protection has the most up to date definition", even though the definitions are up to date.
    3. Frequent failure of the SQL Server scheduled job 'FEP_GetNewData_FEPDW_XXX'. As per the article
    http://blogs.technet.com/b/clientsecurity/archive/2011/01/24/fep-data-collection-job-fails-periodically.aspx
    this is a known issue. Does installing Update Rollup 1 fix it?
    4. Reporting: the subreports (antimalware protection history) display when the report time span is DAY, but when it is WEEK they fail with the error "Subreport could not be shown". As described in the article
    http://blogs.technet.com/b/clientsecurity/archive/2011/05/19/forefront-endpoint-protection-fep-2010-fep-reports-may-not-display-properly.aspx
    does installing the Cumulative Update Packages for Microsoft SQL Server 2008 solve this issue?

    Enforce a definition update and run a full system scan on those infected clients, and if the problem persists, use Windows Defender Offline: boot into those PCs and run a full system scan:
    http://windows.microsoft.com/en-US/windows/what-is-windows-defender-offline
    Make sure Windows Update is running and the clients are up to date.

  • Material Ledger Error CKMLWIP099

    I have been working with Material Ledger (ML) for several years and was deeply involved in the configuration and testing prior to deployment. I have recently come across a problem I can find no resolution for. The message is CKMLWIP099. The text is as follows:
    "Internal error in FORM/FUNCTION CL_CKML_WIP_DBACCESS->QS_COMPLETE in position 5 with RC"
    The issue occurs when running the WIP Revaluation step within the CKMLCP Costing Cockpit.
    Our version is 4.7. I tried waiting until the material period was two periods old and using the "Must_WIP" tip, but the same error happened. Now, the few parts with this error have infected ten times as many other parts. I have reviewed the materials as well as all movement types and cannot find a logical problem.
    If anyone has any ideas or advice I would appreciate it greatly. I have logged an SAP help ticket, but that could take a long time and become very difficult.
    Edited by: Juggler2010 on Dec 15, 2010 6:11 PM
    We opened a ticket with SAP. They found that some PP orders from 2007 popped up in Oct 2010 for which WIP had been run. In my experience, there was a time when RA was accidentally run for PP orders for a period far in the future, which caused ML to break for every part involved. They had given us a special program to cure that. I ran that program before opening the ticket, but it found nothing to fix. This time, SAP asked us to implement note 1497118, which installs the program WIP_HELPDESK. That program should only be run by SAP. After deleting the bad WIP quantity records, I ran Material Ledger for the failed periods with full success. Problem resolved.

    This may help others who experience this unusual issue.

  • SNC on Clustered BOBJ 3.1

    Hello --
    We are taking advantage of SNC to schedule publications that can be used in Xcelsius dashboards via LiveOffice, but this seems to fail periodically due to our clustered environment. We have 3 nodes in the cluster, and the publications succeed as long as (the correct) one Adaptive Job Server is enabled and the other two are disabled. It has to be a specific one, and it has changed when the servers were rebooted. Publications then fail with the error "Unable to reconnect to the CMS <SERVERNAME>:6400. The session has been logged off or has expired. (FWM 01002)" until the right Adaptive Job Server is enabled and the others are disabled.
    This seems to be a problem with the combination of SNC and the cluster. Any ideas?
    Thanks for any help!
    Casey

    Hi,
    It looks like the problem is due to your email server. Maybe, for these publications, the email addresses are different and your server is changing the attachment.
    Try sending the publication to only one person and see if the behaviour is the same.
    Another test is to send it to a different destination, FTP or file, and see if that works.
    Ask your mail administrator whether there is any information in the logs for this particular mail.
    Regards,
    Julian

  • Any issues using FreeAgent Desk with Time Machine?

    I have a FreeAgent Desk, product #9ZC2AG-501, 1 TB.
    I reformatted it for HFS+ and use it over USB (this model only has a USB interface) with a hub and my MacBook Pro running Lion (10.7.2).
    Although I observed this "problem" with previous OS versions, I think it is happening more often now.
    Time Machine will make the initial backup fine (it takes 6-8 hours), then it will make incremental backups fine for several weeks, maybe 2 months.
    Then I will get a TM error. And on the next reboot, I will usually get an "OS X can't fix this drive..." error message; I can see a disk-repair task run first, so I assume the OS does a check before it mounts the drive. Sometimes, if I run Disk Utility and ask it to verify or repair the drive or volume, it completes and the drive works again.
    More often lately, it will tell me it can't fix the drive and that I should save what I can and start over.
    Since it's a backup anyway, I just erase or re-partition it and TM starts up again.
    And the scenario repeats.
    The drive errors vary. Usually very drastic-looking things like "scrambled index" or "corrupted index" or "invalid structure".
    I don't see how or why an external drive would care what OS is being used with it. But I would like to know if there is some deep technical reason this model cannot be used with Time Machine, so I can either fix it or know to give up.
    The USB info for the hub and drive is:
    Hub:
      Product ID:          0xf103
      Vendor ID:          0x2001  (D-Link Corporation)
      Version:          1.00
      Speed:          Up to 480 Mb/sec
      Location ID:          0xfa400000 / 2
      Current Available (mA):          500
      Current Required (mA):          0
    FreeAgent:
      Capacity:          1 TB (1,000,204,884,992 bytes)
      Removable Media:          Yes
      Detachable Drive:          Yes
      BSD Name:          disk1
      Product ID:          0x3008
      Vendor ID:          0x0bc2  (Seagate LLC)
      Version:          1.38
      Serial Number:          2HC015KJ
      Speed:          Up to 480 Mb/sec
      Manufacturer:          Seagate
      Location ID:          0xfa430000 / 5
      Current Available (mA):          500
      Current Required (mA):          2
      Partition Map Type:          GPT (GUID Partition Table)
      S.M.A.R.T. status:          Not Supported
      Volumes:
    disk1s1:
      Capacity:          209.7 MB (209,715,200 bytes)
      BSD Name:          disk1s1
      Content:          EFI
    RcookeBU:
      Capacity:          999.86 GB (999,860,912,128 bytes)
      Available:          644.05 GB (644,050,997,248 bytes)
      Writable:          Yes
      File System:          Journaled HFS+
      BSD Name:          disk1s2
      Mount Point:          /Volumes/RcookeBU
      Content:          Apple_HFS

    I have this same drive. It's a USB drive intended for PCs, not Macs. At the time, Seagate offered FreeAgent drives specifically for Macs. They weren't just pre-formatted to HFS+; they also came with different energy-saving settings in the firmware. The FreeAgent Desk for PC that you and I have has additional energy-saving features designed for Windows only. When used with a Mac, the drive will "sleep" or spin down at the wrong time, even during Time Machine backups, resulting in file-system corruption on the Seagate drive.
    Seagate support recommended disabling the drive's sleep features using a Windows utility, but that didn't actually solve the problem. In fact, there is no way to completely eliminate the problem, but you can reduce it greatly by disabling sleep and hard disk sleep on your Mac (via System Preferences).
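    If you prefer Terminal, the equivalent of those System Preferences settings is pmset (a sketch; 0 means "never", and you may want to disable only disk sleep):
    sudo pmset -a sleep 0 disksleep 0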
    For me, disabling sleep completely allows me to use Time Machine on the disk, but clones will still fail periodically.
    You can try to live with it or put it on Craigslist and sell it to someone with a PC.

  • XANO_TA exception at commit time : WLE/Tuxedo

    Hi,
    We have the following basic setup:
    1. Oracle 8.1.6
    2. A WLE transactional C++ server (XA)
    3. A WLE client
    The XA transactions are initiated explicitly (by calling Current->begin,
    Current-> ... in the C++ server). Periodically, the commit() fails. The error in
    the xa_* log is XAER_NOTA (see trace below). I would be interested to hear from
    anyone who has seen this problem before or who may know why this issue is
    occurring.
    Thanks
    Dermot
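    For readers less familiar with the demarcation described above: the client brackets its work between an explicit begin and commit. A minimal sketch of the same pattern through Tuxedo's ATMI C API (illustrative only; our code actually drives this through the OTS Current interface, and the 30-second timeout is arbitrary):
    #include <stdio.h>
    #include <atmi.h>   /* Tuxedo ATMI: tpbegin/tpcommit, tperrno, tpstrerror */
    /* Sketch of one explicitly demarcated transaction; the real service
     * calls made inside the transaction are elided. */
    int run_one_transaction(void)
    {
        if (tpbegin(30, 0) == -1) {
            fprintf(stderr, "tpbegin failed: %s\n", tpstrerror(tperrno));
            return -1;
        }
        /* ... tpcall() the transactional services here ... */
        if (tpcommit(0) == -1) {
            /* a commit-time XA error such as the XAER_NOTA above surfaces here;
             * by this point the transaction has typically been rolled back */
            fprintf(stderr, "tpcommit failed: %s\n", tpstrerror(tperrno));
            return -1;
        }
        return 0;
    }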
    Traces (when it works and when it fails)
    We have enabled full XA debugging by setting the following in the
    environment:
    TMTRACE=atmi+iatmi+xa:ulog:dye
    In the ULOG, when the transaction works, we get:
    133021.matrix2!TMS_ORA.17343: gtrid x0 x3ae81420 x1: TRACE:ia:
    tpservice({"TMS", 0x10, 0x0, 0, 0, 0, {0, -2, -1}})
    133021.matrix2!TMS_ORA.17343: gtrid x0 x3ae81420 x1: TRACE:xa:
    xa_commit(0x3178c, 0, 0x40000000)
    133021.matrix2!TMS_ORA.17343: gtrid x0 x3ae81420 x1: TRACE:xa: }
    xa_commit = 0
    In the xa... trace log we see:
    133021.17345.0: xaostart:
    xid=0x28-6d61747269783200000000000000000000000000000000000000000000000000000
    4d73c3ae8142000000001-0000000300000003, rmid=0, flags=0x0
    133021.17345.0: OCITransStart: Attempting
    133021.17345.0:OCITransStart: Succeeded
    133021.17345.0: xaostart: return XA_OK
    133021.17345.0:xaoend:
    xid=0x28-6d61747269783200000000000000000000000000000000000000000000000000000
    4d73c3ae8142000000001-0000000300000003, rmid=0, flags=0x4000000
    133021.17345.0:OCITransDetach: Attempting
    133021.17345.0:OCITransDetach: Succeeded
    133021.17345.0:xaoend: return 0
    133021.17343.0:xaocommit:
    xid=0x28-6d61747269783200000000000000000000000000000000000000000000000000000
    4d73c3ae8142000000001-0000000300000003, rmid=0, flags=0x40000000
    133021.17343.0:OCITransCommit: Attempting
    133021.17343.0:OCITransCommit: Succeeded
    133021.17343.0:xaocommit: rtn 0
    In the ULOG, when the transaction fails, we get:
    133102.matrix2!TMS_ORA.17344: gtrid x0 x3ae81420 x3: TRACE:ia:
    tpservice({"TMS", 0x10, 0x0, 0, 0, 0, {0, -2, -1}})
    133102.matrix2!TMS_ORA.17344: gtrid x0 x3ae81420 x3: TRACE:xa:
    xa_commit(0x3178c, 0, 0x40000000)
    133102.matrix2!TMS_ORA.17344: gtrid x0 x3ae81420 x3: TRACE:xa: }
    xa_commit = -4
    133102.matrix2!TMS_ORA.17344: gtrid x0 x3ae81420 x3: CMDTUX_CAT:423: WARN:
    One-phase commit - xa_commit returned XAER_NOTA
    And the xa trace file contains:
    133102.17345.0:xaostart:
    xid=0x28-6d61747269783200000000000000000000000000000000000000000000000000000
    4d73c3ae8142000000003-0000000300000003, rmid=0, flags=0x0
    133102.17345.0:OCITransStart: Attempting
    133102.17345.0:OCITransStart: Succeeded
    133102.17345.0:xaostart: return XA_OK
    133102.17345.0:xaoend:
    xid=0x28-6d61747269783200000000000000000000000000000000000000000000000000000
    4d73c3ae8142000000003-0000000300000003, rmid=0, flags=0x4000000
    133102.17345.0:OCITransDetach: Attempting
    133102.17345.0:OCITransDetach: Succeeded
    133102.17345.0:xaoend: return 0
    133102.17344.0:xaocommit:
    xid=0x28-6d61747269783200000000000000000000000000000000000000000000000000000
    4d73c3ae8142000000003-0000000300000003, rmid=0, flags=0x40000000
    133102.17344.0:OCITransCommit: Attempting
    133102.17344.0:OCITransCommit return code: -1
    133102.17344.0:ORA-00000: normal, successful completion
    133102.17344.0:xaocommit: rtn -4

    Hi Peter,
    > The only reason for that which springs to mind is that it got committed
    > somewhere outside of the TMS that is logging the error you attached. Is
    > that possible in your environment?
    We originally thought this might be the case, but our server is stripped down
    at this stage, so it should be the only one performing the commit. We do have
    external servers performing their own transactions (they are mostly not XA,
    and they would have their own transaction IDs).
    Another server is involved in the global transaction that fails periodically,
    but it does not commit if it is already in a transaction, i.e. it checks the
    transactions::Current ...
    Also, as I said, we stripped down our server so that the second server was
    removed, and the issue still appeared periodically.
    > Or maybe in some code paths you start the transaction and don't access the
    > database?
    Could you elaborate on this please?
    Thanks
    Dermot
    Peter Holditch <[email protected]> wrote in message news:[email protected]...
    Dermot,
    XAER_NOTA means that the RM doesn't know about the transaction.
    The only reason for that which springs to mind is that it got committed
    somewhere outside of the TMS that is logging the error you attached. Is that
    possible in your environment? Or maybe in some code paths you start the
    transaction and don't access the database?
    Regards,
    Peter.

  • Mds is driving me CRAZY using up system resources

    A couple of weeks ago, I noticed that mds was using a crazy amount of memory in system processes. I hadn't really noticed it before (I normally use the iStat Pro widget, which only lists the top 5), but now it normally uses 100+ MB of memory, sometimes even as high as 300.
    I searched the forum and tried to implement a number of solutions, most related to Time Machine. I even did an "Archive and Install", and everything failed. The mds problem popped up again and again.
    Time Machine would fail periodically with an error message, and I noticed in recent weeks that its backups would be abnormally large (for example, a backup from one hour to the next would be several gigabytes larger, even though I was only web browsing and had not made significant changes to my files). In fact, my Time Machine went through hundreds of gigabytes of space in just a couple of days, even though it had gone months growing only incrementally.
    I used Spotless to delete and rebuild my Spotlight index, thinking it might be corrupt, but again mds continued to go berserk.
    I eventually got fed up trying to narrow down the problem and went to the nuclear option. I not only did an erase-and-install of Leopard, but I deleted my Time Machine backups as well and started over. I installed all my apps from scratch, and got rid of the ones I didn't use anymore in the process. I had a cloned copy of my drive created by SuperDuper, and copied over my documents and music folders, but everything else is from scratch.
    Still, the mds problem persists.
    I took the step of excluding Time Machine from Spotlight, and that doesn't help either. I manually kill mds in Activity Monitor, but it soon comes back.
    I don't know what to do. By contrast, my Air normally shows mds running under 40 MB of memory. I can't believe that 100 MB+ is normal. Is it?
    Any suggestions?

    Kevin_C wrote:
    Time Machine would fail periodically with an error message, and I noticed in recent weeks that its backups would be abnormally large (for example, a backup from one hour to the next would be several gigabytes larger, even though I was only web browsing and had not made significant changes to my files). In fact, my Time Machine went through hundreds of gigabytes of space in just a couple of days, even though it had gone months growing only incrementally.
    This may or may not be related. Without knowing what error message you get, we'd just be guessing.
    Download the +Time Machine Buddy+ widget from: http://www.apple.com/downloads/dashboard/status/timemachinebuddy.html. It shows the messages from your logs for one TM backup run at a time, in a small window. Navigate to a failed backup, then copy and post the messages here (be sure to get them all, as sometimes they overflow the small window).
    For backups being unexpectedly large, there are several possibilities. One is that some apps use one large file, often a database, to hold all their data, instead of individual files. Entourage is, I'm told, one of those. So sending or receiving a single tiny message results in the whole database being marked as changed and backed up. With Apple Mail, however, each message is in a single file, so incremental backups only copy the small file(s) that changed.
    You might want to download the TimeTracker app, from www.charlessoft.com.
    It shows most of the files saved by TM for each backup (excluding some system files, etc.). Look through its results to see what's being saved. Depending on what you find, post back and we'll see if we can figure it out.
    Back to your original question about memory, though. The amount of memory used by mds may or may not be a problem. When a process no longer needs some memory it's been using, OS X marks it "inactive", but doesn't necessarily remove it from that app.
    Inactive memory is a pool that's available for use by other processes, but OS X will make new assignments from Free memory first. This is to speed re-assignment if the original process needs the memory again.
    What you need to look at is, first, the amount of Free memory. If there's much there, then mds isn't hogging that 100-300 MB all the time. Next, check the rate of Page Outs towards the bottom of the Activity Monitor screen. If that's zero or low, it means you have enough memory that OS X isn't swapping virtual memory at a high rate.

  • Need opinions about PSU

    Hello there,
    I am the owner of an MSI i865PE Neo2-LS motherboard with a P4 2.4 GHz (800) Northwood CPU, 2x 256 MB RAM and a GF4 Ti4200.
    This system has been giving me headaches from day one, and for 3 years now. It is a great and stable system, but there is one little disadvantage: the AGP, PCI and MEM slots. They are not soldered properly and cause system crashes if there is a slight change. Basically, you must leave the chassis alone and not try to upgrade if you want it to boot up... That's my experience with my MB.
    I've changed the PSU a couple of times, because I thought the PSU was the cause of the system crashes, but recently I realized that it is the AGP slot.
    But this isn't the issue now. The problem I am dealing with now is the PSU. When I turned on my PC, Windows XP gave me a blue screen of death, and when I restarted, the system went into Windows normally. But then I realized that one of my CD drives was not working; I could not open the CD drive to put a CD in. Then I unplugged the power cord and used another one, and when I started the system, it booted normally and I could open the CD drive.
    My question is: what could cause this? Is it possible that the PSU overheated, or that it is nearing the end of its life and I must consider buying a new one? The PSU is as old as the system (3 years).
    Another issue I am having is system lockups, but I believe it's the heat... It is summer and it is really hot in my room; even system temps are near 44°C, when they used to be 36-38.
    Is it possible that the PSU is heating the system too much, and maybe dust is the cause of the overheating?
    Thanks, all.

    You said it has been giving you headaches from day one, for 3 years, yet only now you ask...
    The system temp is pretty high. Clean up all the dust in the chassis, including all the fans: CPU, chipset, graphics card and PSU fan (not sure if you can clean that one up). Try cleaning everything first. There is a lot of dust trapped in the fins of the heatsinks; clean all of them. Also, reapply thermal grease on your CPU. Clean the old grease off before you apply the new.

  • G4 Display Failure

    My PowerBook G4's display fails periodically. Sometimes when I start my G4 I get no video, but the computer starts??? Then I shut it down, start it the next day, and all of a sudden it works.

    Here's some helpful laptop sites.
    Fix It Guides for Mac Laptops & Mini
    http://www.ifixit.com/Guide/
    The guides can be viewed on-line or you can download a PDF file.
    How to Upgrade, Repair, Disassemble an Apple/Macintosh Laptop or Notebook
    http://repair4laptop.org/disassembly_apple.html
    PowerBookTech
    http://www.powerbooktech.com/
    (Note - PowerBookTech has some good info, however, some Mac users in other forums were having difficulty getting delivery on parts from them.)
    Laptop Repair Guides
    http://www.applerepairmanuals.com/#portables
    Cheers, Tom

  • Jabber for Windows - periodically tries to re-install and fails

    Jabber 9.1
    CUCM 7.1.3
    Windows 7 32 bit
    Cisco Presence   8.0.2.10000-30
    Jabber for Windows periodically tries to re-install and fails. After install, Jabber works for a while, but then when I try to open another program (Outlook, IE) it tries to re-install. If I go to the directory where the install files are and run a repair, it works. This seems like a conflict with another application.
    I see this in the Windows application event log:
    Detection of product '{4EB9D7DD-65B5-44ED-B877-CE3EF9B4530F}', feature 'Cisco_Jabber_Files', component '{7CB949BE-2270-4992-9C4C-8FDDB90F6FE2}' failed. The resource 'C:\Program Files\Cisco Systems\Cisco Jabber\JabberMeeting\DesktopShare\atgpcdec.dll' does not exist.

    Hi,
    Do you have the WebEx Productivity plugin installed on the machine? I can reproduce this problem after exiting Jabber, creating a one-click meeting from Outlook, and restarting Jabber. Please provide further information to help me open a defect.
    - What was the version of Jabber on this machine prior to 9.1.0?
    - What is the version of the WebEx Productivity Tools plugin (if it is installed)?
    - Is there a particular sequence that you can follow to reproduce it every time?
    Thanks,
    Maqsood
