Report not being written to network storage

Hello again everybody,
My application provides the user with "on demand" report displays, which, after much mucking around, are now working reliably (again).  The last thing the application does is create a set of CSV files and write a set of reports to a couple of network storage locations.  My problem is that one of the reports never shows up in its network folder - or any network folder, as far as I can tell.
When I request the "on demand" version of the report, it's displayed onscreen correctly via a form using the Crystal Reports Viewer control.  When the application creates it, it's MIA.
The other six reports in the various network locations are all present and accounted for.  I've tried to debug this by displaying status message boxes, and the information they show me is correct (report pathnames, destination pathnames, report file names, parameters, database connections, etc.).  I have try/catch structures in every subroutine and function and log errors to a DB table, but I never receive any errors, either at runtime or in the database table.
The report also works fine via the CR IDE.
One final brain teaser - the application works perfectly in my development environment, both in Debug mode and when the application has been built.  The problem shows up when I move to a Test environment.  I repeat: the other reports are generated without fail...
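For reference, the export step boils down to the pattern below (a rough sketch with placeholder paths, credentials and format, assuming the usual ReportDocument.ExportToDisk call; since the export apparently can finish without throwing, I check for the file afterwards):

Dim report As New CrystalDecisions.CrystalReports.Engine.ReportDocument
Try
    report.Load("\\server\reports\MissingReport.rpt")   ' placeholder report path
    report.SetDatabaseLogon("user", "password")         ' placeholder Oracle credentials
    report.ExportToDisk( _
        CrystalDecisions.Shared.ExportFormatType.PortableDocFormat, _
        "\\server\share\MissingReport.pdf")             ' placeholder destination
    ' The export can complete without raising an exception, so verify the file landed
    If Not System.IO.File.Exists("\\server\share\MissingReport.pdf") Then
        ' log "export produced no file" to the error table here
    End If
Finally
    report.Close()
End Try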
Development environment:
Windows XP Pro SP 3
VB .NET 2003
Oracle 10g
CR XI Rev 1
Test Environment:
Win Server 2003 SP 2
Oracle 10g
CR XI Merge modules
Regards,
Mark Baran
Senior Analyst / Developer
American Agricultural Insurance Company, Inc.

An interesting issue...
The utility [Process Monitor|http://technet.microsoft.com/en-ca/sysinternals/bb896645.aspx] may be quite useful here.
Make sure you establish a filter for your process, as procmon likes to create large log files. Once you have a log, search it for the report name and see what is happening to it.
Ludek

Similar Messages

  • Ocrfile is not being written to. Open file issues. Help please.

    I've been troubleshooting an open file issue on our Test environment for quite a while now. Oracle has had me update to the latest CRS bundle for 10.2.0.3, then upgrade to 10.2.0.4, then apply two more patches via OPatch to bring 10.2.0.4 RAC to its most recent patch level. None of these patches resolved our problem. We have ~8700 datafiles in the database; once the database is started, we're at ~11k open files on Production, but on Test we're at ~37k or higher. It takes 1-2 days to hit the 65536 limit, at which point the database crashes. I have to 'bounce' the database to keep it from crashing. Yes, I could raise the ulimit, but that isn't solving the problem.
    Over the weekend I noticed that on Production and DEV the ocrfile is being written to constantly and has a current timestamp, but on Test the ocrfile has not been written to since the last OPatch install. I've checked the CRS status via 'cluvfy stage -post crsinst -n all -verbose' and everything comes back as 'passed'. The database is up and running, but the ocrfile is still timestamped April 14th, and open files jump to 37k upon opening the database and continue to grow to the ulimit. Before hitting the limit, I'll have over 5,000 open files for 'hc_<instance>.dat', which is what led me down the path of patching Oracle CRS and RDBMS to resolve the 'hc_<instance>.dat' bug that was supposed to be fixed in all of the patches I've applied.
    From imon_<instance>.log:
    Health check failed to connect to instance.
    GIM-00090: OS-dependent operation:mmap failed with status: 22
    GIM-00091: OS failure message: Invalid argument
    GIM-00092: OS failure occurred at: sskgmsmr_13
    That info started the patching process, but it seems like there's more to it and this is just a symptom of some other issue. The fact that the ocrfile on Test is not being written to while it updates frequently on Prod and Dev seems odd.
    We're using OCFS2 as our CFS, updated to the most recent version for our kernel (RHEL AS 4 u7 -- 2.6.9-67.0.15.ELsmp for x86_64).
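    For what it's worth, this is roughly how I've been counting the handles (a sketch; 'ORCL' stands in for our instance name):
    # total open hc_<instance>.dat handles across all processes
    lsof 2>/dev/null | grep -c 'hc_ORCL.dat'
    # open files held by the instance's pmon process, as a proxy for the growth
    lsof -p $(pgrep -f ora_pmon_ORCL) 2>/dev/null | wc -l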
    Any help is greatly appreciated.

    Check the bug on Metalink; if it is Bug 6931689, the fix is as follows.
    To resolve this issue, please apply one of the following patches:
    Patch 7298531 CRS MLR#2 ON TOP OF 10.2.0.4 FOR BUGS 6931689 7174111 6912026 7116314
    or
    Patch 7493592 CRS 10.2.0.4 Bundle Patch #2
    Be aware that the patch has to be applied to the 10.2.0.4 database home to fix the problem.
    Good Luck

  • IO Labels are NOT being written to prefs!

    I've followed all the advice on existing threads about this, and it's definitely a serious bug for me: IO Labels are not being written to prefs.
    Any ideas? Already tried deleting prefs, creating new blank project and not saving, nothing has worked.

    I found a workaround for anyone having this issue - and this is the ONLY thing that has worked after a week of trying everything on several forums.
    1. Open Logic and set your labels how you want.
    2. While Logic is open, go to ~/Library/Preferences and delete com.apple.logic.pro.plist (Terminal equivalent below).
    3. Quit Logic. I don't think it matters whether you save your project or not. When Logic quits, a new plist will be written, and this one WILL have your labels!
    It seems that on my machine Logic would not update the IO labels bit of the prefs unless it was writing a completely new prefs file.
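    The delete step from Terminal, if that's easier (same file as named above; keep Logic running while you do it):
    rm ~/Library/Preferences/com.apple.logic.pro.plist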

  • Time capsule not being recognized by network,  Airport working fine.

    Time Capsule is not being recognized by the network.  Tried unplugging and restarting, but no go.  AirPort is working fine, and the network is up and running.

    Try just doing a reset on the TC.
    An upgrade to Lion might just put you into the fire; even the frying pan is better.
    I would suggest you downgrade the firmware on the TC to 7.5.2, as the later firmware is buggier and adds nothing for OSes earlier than Lion; on the contrary, it causes more issues.
    This will not harm your existing backups.
    Are the two computers both connected by Ethernet, or is the connection issue over wireless?
    Give a bit more info on the network layout: the mode of the TC, and whether any computers are using Ethernet.

  • Reports not being Generated

    I use Crystal Reports with another program that is supposed to generate 2 reports at the click of a button, but it is not working. The reports are not being generated. I have installed Crystal Reports on my system twice, and there doesn't seem to be a problem with the installation, but the program is still not working. Please help.

    I am fairly new at this, so let me try my best to answer these questions:
    Crystal Reports XI
    Visual Studio? Not sure
    No service pack applied. How and where do I get it?
    I am using Windows XP
    I am not viewing the reports, as they have to be generated before I can view them, and the system does nothing after I click the generate button
    No, I have not tried to view a single report; I don't know how.
    I have not used the Crystal Reports Designer for anything else.
    Edited by: Sharon Aird on Dec 1, 2008 5:05 PM

  • ADDM report not being generated

    I noticed that ADDM reports have not been generated for the last 2 days, so I did the following:
    SQL> exec dbms_workload_repository.create_snapshot;
    BEGIN dbms_workload_repository.create_snapshot; END;
    ERROR at line 1:
    ORA-13516: AWR Operation failed: only a subset of SQL can be issued
    ORA-06512: at "SYS.DBMS_WORKLOAD_REPOSITORY", line 10
    ORA-06512: at "SYS.DBMS_WORKLOAD_REPOSITORY", line 33
    ORA-06512: at line 1
    I got the above error. I am not sure what it means. I also ran the SQL below:
    SQL> select nam.ksppinm name, val.KSPPSTVL, nam.ksppdesc description
    2 from x$ksppi nam, x$ksppsv val
    3 where nam.indx = val.indx and
    4 nam.ksppinm = '_awr_restrict_mode'
    5 order by 1
    6 ;
    _awr_restrict_mode
    FALSE
    AWR Restrict Mode
    What could be the reason that the reports are not being generated anymore?

    Check Note 308003.1 - AWR Snapshots Not Generating.
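    It may also be worth confirming that the snapshot interval hasn't been effectively disabled (a quick check; an interval reported as +40150 days usually means automatic snapshots were turned off):
    select snap_interval, retention from dba_hist_wr_control;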

  • Hprof heap dump not being written to specified file

    I am running with the following
    -Xrunhprof:heap=all,format=b,file=/tmp/englog.txt (java 1.2.2_10)
    When I start the appserver, the file /tmp/englog1.txt gets created, but
    when I do a kill -3 pid on the .kjs process, nothing else is being written to
    /tmp/englog1.txt. In the kjs log I do see the "Dumping java heap..." message
    and a core file is generated.
    Any ideas on why I'm not getting anything else written to /tmp/englog1.txt?
    Thanks.

    Hi,
    It seems that the option you are using is correct. I would modify it to something like:
    java -Xrunhprof:heap=all,format=a,cpu=samples,file=/tmp/englog.txt,doe=n ClassFile
    This seems to work on 1.3.1_02, so it may be something specific to the JDK version that you are using. Try a later version just to make sure.
    -Manish

  • Sections of report not being displayed on some systems

    Post Author: jlutz
    CA Forum: General
    Hello... strange issue... We have these Crystal 11 reports deployed that are being called via a VS .NET 2005 application.  The reports display fine on most systems, but on some systems (Dells, a few years old, 512 MB of RAM), sections of the report are not being displayed.  For example, on one report parts of the header are not displayed, nor are some of the subtotal sections, etc. However, if they zoom to 75%, they can view the subtotal section, but the header part disappears. Additionally, if they export or print the report, it comes out just fine! Any idea what could be going on here? Thanks!

    Post Author: PeterLiebich
    CA Forum: General
    I have issues in the Report Designer when I have my hardware acceleration set at maximum; try reducing it. Go to
    Control Panel -> Display Properties (Settings tab) -> Advanced button -> Troubleshoot tab. I had to set mine down by 4 notches before it worked. Try setting it to none to see if it fixes your issue.

  • Opmn logs not being written

    Hi All,
    We are facing an issue.
    No logs are being written to the opmn/logs directory. They were being written correctly till 4th December and then stopped all of a sudden.
    Are there any configuration files which may have been affected?
    Best regards,
    Brinda

    To clarify:
    We are now rotating the logfiles with the Linux/Unix command logrotate. I suspect this is what is causing the issue: the logs are not being filled after rotation, and we need to restart opmn for the logs to start getting populated again.
    So I think we need to configure rotating logs in opmn.xml.
    The Application Server version is 10.1.3. This is the log line in our opmn.xml:
    <log path="$ORACLE_HOME/opmn/logs/opmn.log" comp="internal;ons;pm" rotation-size="1500000"/>
    So the question is: how do we activate opmn to rotate the log so that we do not need to use logrotate?
    This document says that you have to activate ODL for log rotation to work:
    http://download.oracle.com/docs/cd/B25221_04/web.1013/b14432/logadmin.htm#sthref370
    Is this true, or can we rotate text logs as well? That is what we would prefer.
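    In the meantime, a possible workaround on our side would be to keep logrotate but have it truncate in place, so opmn's open file handle stays valid (a sketch; the path is just our $ORACLE_HOME expanded):
    # /etc/logrotate.d/opmn
    /oracle/product/10.1.3/opmn/logs/opmn.log {
        weekly
        rotate 8
        compress
        missingok
        copytruncate
    }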
    Best regards,
    Gustav

  • EWA Report not being sent for two systems

    Hi Experts,
    We are having an issue with the EWA Report in Solution Manager. We have searched hard but could not find a solution.
    Our Solution Manager is configured to provide EWA for 4 systems: ECC, BW, PI and Solution Manager itself.
    The configuration is in place to the extent that we are able to view the weekly reports in Solution Manager.
    The issues are:
    1. We are able to receive mail notifications for Solution Manager and ECC, but not for BW and PI.
    2. The ECC report is coming out fine, but the Solution Manager report is blank.
    The places I checked the configuration:
    1. SDCCN for all systems.
    2. SOLMAN_EWA_ADMIN - all systems active and checked for EWA
    3. Background Jobs for all systems -Maintenance Package, Refresh Session, Earlywatch Alert for Solution Manager
    4. Background Jobs related to the task including SM:EXEC SERVICES
    Later on I deleted and rescheduled the job SM:EXEC SERVICES to make sure it covers all systems.
    Data is visible in Solution Manager, but the report is not being sent over mail for two systems: BW and PI.
    The Solution Manager report is being sent, but it is empty.
    Please suggest.
    Regards,
    Sabita

    Hi,
    What is your ST version? Check whether this note is applicable:
    [https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1496931|https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1496931]
    Also check the version of your service update ST-SER; patching this can solve the issue as well.
    Refer to [Note 1482818 - Service updates ST-SER 701_2010_1|https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=0001482818&nlang=E]
    Jansi

  • PSE10 Trial will not show WD MyBookLive network storage

    Elements 10 trial will not show my WD MyBookLive network storage device in Editor (File Open) or Organizer (Get Photos and Video From Files and Folders). I CAN use win7 explorer to open a folder on the drive and drag-drop into PSE10 just fine.
    Help appreciated.

    It means just what it says... you can't create a Lightroom catalog on a server. It must be on a local hard drive (external or internal).

  • Finder data not being written to .DS_Store file

    The symptom of the problem I am experiencing is that the positions of icons on my desktop (as well as icons in other folders in my home directory, and Finder window settings for folders in my home directory) are lost every time I log out. On a subsequent login, all of the icons "reflow" starting from the top right corner of the screen, just below the boot volume icon. I have been able to determine that this has to do with the .DS_Store file(s) in the Desktop folder and other folders of my home directory. If a .DS_Store file exists, the timestamp does not change when I change the icon layout and log out. If I delete the .DS_Store file, it is not re-created.
    In my case, my home directory (and the child Desktop folder) is being mounted off an SMB share on a Windows 2003 server. My Mac is using AD authentication to the same Windows 2003 server, which is the domain controller. I'm logging in with my AD credentials, and my home directory mapping is picked up from the home directory field in AD.
    Now, Googling this problem, I found a lot of people complaining about wanting to suppress the use/creation of the .DS_Store files on Windows network volumes. This led to an Apple KB article (http://support.apple.com/kb/HT1629) on how to modify a default to prevent the creation of .DS_Store files on network volumes--essentially the very behavior I am experiencing. The upshot of the KB article is to use the following command in Terminal:
    defaults write com.apple.desktopservices DSDontWriteNetworkStores true
    I did a 'defaults read' to confirm this default isn't set on my install of Mac OS X 10.5.6--and it isn't. I then tried using the following variation in the hope I could force the behavior I wanted:
    defaults write com.apple.desktopservices DSDontWriteNetworkStores false
    Predictably, this had no effect.
    The upshot is, NONE of the Finder data for files and folders in my home directory (icon positions, Finder window view defaults, etc.) is preserved between logons. And this is driving me nuts! I've spent several hours over the past two evenings trying to troubleshoot this, and I've had no luck.
    As a footnote, I'll mention that if I drag a folder from my home directory onto the local hard disk, the .DS_Store file is created/updated and things behave as expected (icon position and Finder window defaults are preserved). But, if I then drag the folder back over to my home directory, the values in the .DS_Store file become "frozen" and don't change.

    Hey, try this:
    1.
    Put a file in a folder on your Desktop.
    Edit: not your Desktop, but rather a typical local home on an HFS+ volume ~/Desktop
    2.
    Use "Get Info" to add a "Comment".
    The comment can be up to somewhere between 700 and 800 characters.
    3.
    Copy the folder to a FAT formatted flash drive, SMB share, etc.
    4.
    Create a new folder on the flash drive.
    5.
    Move the file from the first folder to the second folder.
    Confirm that the "Finder" shows the comment to be there.
    6.
    Quit the "Finder" - in 10.4 and earlier, this action would ensure that all .DS_Store files get updated, including transfer of comments. I.e. don't "relaunch" or "kill", etc. Enable and user the "Quit" menu, or use:<pre>
    osascript -e 'tell application "Finder" to quit</pre>
    7.
    Now where's the comment?
    In step 2, you could also have just created a file on the non-HFS volume and wasted your time typing in a 700 character comment.
    In step 6, a more real-world scenario is ejecting the drive or logging out, but deliberately quitting the "Finder" is the most conservative way to ensure that changes get written to .DS_Store files and comments survive. In 10.5.6, even under these conditions, comments are lost.
    Icon positions and view preferences are one thing, but with comments - that's real user-inputted data that Apple is playing fast and loose with. And if they no longer support comments on non-HFS volumes, the "Finder" shouldn't be showing you the comments when you double-check them in step 5, or allow them to be added in the alternate version of step 2.
    ..."C'mon Apple... what gives here?"...
    Unfortunately, this "Discussions" site is not frequented by Apple devs. Have you considered filing a bug report? I wonder what they would tell you, e.g. whether the treatment of .DS_Store files actually does turn out to be a bug or is intentional, as it suspiciously seems...
    http://developer.apple.com/bugreporter/index.html

  • MBR not being written to during install of RT to a Desktop Computer.

    Hello,
    We have been trying on and off for months to get LabVIEW RT to load onto a desktop computer and have that computer boot directly from the hard drive we installed RT on. First we go through the HD format disk procedure, and it doesn't give any errors. When we reboot the computer after the install, we cannot boot from the hard drive. The hard drive still seems to have Grub (a boot loader that was used previously on this hard drive to run Linux), since it spits out Grub errors while trying to boot. I checked the boot sector of the hard drive, and that seems to have some LabVIEW content on it, and the partition where the LVRT is seems to have the files that should be there as well.
    We formatted the hard drive using Partition Magic and set the disk up to have one primary partition that was set to active and formatted as FAT32. We did not specify a drive letter for the partition while creating it, since C:\ was already being used on the computer where we were formatting the disk, and I know that LVRT uses drive C:\ as well... so we didn't want any conflicts.
    After all this... no errors during the install, and still we cannot boot from the hard drive. Instead, we boot using the boot disk, which seems to work fine, but we would really like to be able to boot from the hard drive and take the floppy out of the computer that we are using.
    Has anyone else encountered any problems like this before? I have read some of the forum posts about people having the same type of issue, but they were able to set the partition as Active and everything worked out; that was not the case for us.
    Any help is greatly appreciated, since calling NI support doesn't really seem to lead anywhere, and some of them seem like they don't know too much about hard drives (which is understandable, since this is a very small portion of what they do).
    Thanks
    -Mark

    Trey,
    The drive that I formatted was the IDE drive I was using previously for the RTOS that was not booting up on its own, but was running when we used the boot disk.
    I ran the PC Eval before the format, and everything came back successful.
    I deleted the partition, created a new one, and formatted the disk again to FAT32 via Disk Management in Windows XP Pro (just to start clean). After that I ran CHKDSK, and no errors were reported. I then booted using the PC Eval disk, and this time I got this error:
    WARNING: Flush from cache to disk Failed
    I'm not sure why I suddenly have this error, since all I did was reformat the disk. What should the partition be set up as, anyway? I have never seen that mentioned anywhere. Is it supposed to be logical or primary? Should it be set to active?
    I will try different combinations, since somehow it was working before, so there must be a way to get it formatted correctly. I'll let you know which one seems to work.
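    One more thing I plan to try: the partitioning tools rewrite the partition table and the partition boot sector, but not necessarily the MBR code where Grub lives, so clearing the MBR from the XP Recovery Console might help (assuming the RT drive is attached as the second disk):
    fixmbr \Device\HardDisk1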

  • Messages not being written to page

    Hi,
    What could stop messages from being displayed/written? I have changed my page template from a table-based layout to a CSS layout and have wrapped #GLOBAL_NOTIFICATION# #SUCCESS_MESSAGE# #NOTIFICATION_MESSAGE# in a div. I cause an error intentionally on my page, but the message does not appear. When I inspect the page source, it confirms that the message was not written to the page. I have an identical application that uses table layouts for the page template, and the same process there produces a message; when I inspect that page source, the message is written in the #GLOBAL_NOTIFICATION# #SUCCESS_MESSAGE# #NOTIFICATION_MESSAGE# position as expected.
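    For reference, the wrapper in the new CSS template is essentially this (reconstructed from the description above; the class name is made up):
    <div class="page-messages">
      #GLOBAL_NOTIFICATION# #SUCCESS_MESSAGE# #NOTIFICATION_MESSAGE#
    </div>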
    Would anybody have any ideas for me to pursue to get to the bottom of this problem?
    Thank You
    Ben

    Scott,
    I found what the problem was ...
    The page template has a subtemplate region, and the entries for Success Message and Notification were empty.
    Ben
    Edited by: Benton on Jun 18, 2009 2:09 PM

  • Audit Reports not being generated

    OIM is not processing any audit reports.
    ERROR [XELLERATE.AUDITOR] Error while processing audit message
    java.lang.NullPointerException
    It hasn't processed any records till now. The AUD_JMS table has thousands of records.
    My version: 9.1.0.1865.28
    Any suggestions?

    Seems like Article ID 864906.1.
    Cause
    The cause of this problem is that the entries in the aud_jms table which were not processed
    after running the "Issue Audit Messages Task" scheduled task are not complete audit entries.
    Solution
    1. Run the following query:
    select IDENTIFIER from aud_jms where aud_class='UserProfileAuditor'
    IDENTIFIER is the usr_key.
    Save the output of the above query into a text file, say a.txt; it will look like:
    122896
    122896
    122997
    122999
    122999
    122986
    122997
    2. Delete all the entries (rows) from the aud_jms table (a SQL sketch follows after step 4).
    3. Run GenerateSnapshot.sh/GenerateSnapshot.bat with the following parameters:
    GenerateSnapshot.bat -inputFile E:\OIM9101\XLServer\xellerate\bin\a.txt -missingSnapshots -username xelsysadm -password xelsysadm -numOfThreads 2
    where E:\OIM9101\XLServer\xellerate\bin\a.txt is the location of the text file created in Step 1.
    4. Run the "Issue Audit Messages Task" scheduled task.
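    For completeness, step 2 as SQL (a sketch; export or back up the aud_jms table first, since this discards the queued audit messages):
    -- step 2: clear the incomplete queued entries once a.txt has been saved
    delete from aud_jms;
    commit;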
