Automation of jobs

Hi All
I have an Oracle database, version 10.1.0.5.0, on the Windows platform.
I have scheduled Export and RMAN jobs to run every day.
Now I want to get an email notification when these jobs complete.
Does anyone have a script for this that they could share with me?
Thanks & Regards,
Ravi

How did you schedule the jobs? OEM? DBMS_SCHEDULER? The Windows scheduler?
OEM can email when jobs complete. You can call UTL_MAIL to send an email from PL/SQL. You can also send email from a .bat file.
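A bare-bones sketch of the UTL_MAIL route (all names and addresses below are placeholders, not from this thread; UTL_MAIL is not installed by default on 10g, so the prerequisites in the comments come first):

```sql
-- Prerequisites (run as SYS): @?/rdbms/admin/utlmail.sql and @?/rdbms/admin/prvtmail.plb,
-- then point the instance at your mail relay:
--   ALTER SYSTEM SET smtp_out_server = 'mailhost.example.com:25';
-- Call a block like this at the end of your export/RMAN wrapper.
BEGIN
    UTL_MAIL.send(
        sender     => 'oracle@example.com',   -- placeholder address
        recipients => 'dba@example.com',      -- placeholder address
        subject    => 'Nightly export/RMAN job finished',
        message    => 'Job completed at ' || TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));
END;
/
```

From a .bat wrapper you could run this block via sqlplus once the export/RMAN step returns, so the mail is only sent after the job actually finishes.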

Similar Messages

  • Automating Complex jobs - advice needed (all are welcome)

    Hi all,
    We have successfully implemented an XML workflow, fully automated through a script that places all tables and images according to citations etc., and it is working fine (these jobs need the script written only once, since they share the same style and layout); this is for magazines and journals.
    Now we are concentrating on automating books. We know this is not a one-time script, since each book has different elements, styles and boxes.
    This is where we need advice from the scripting guys on how to tackle these types of projects.
    The projects we have are highly complex jobs with lots of boxes (each box has its own design). We are going to take the book projects into an XML workflow using DocBook.
    All the boxes are placed in a library. Our question: if we place the box styles in a library, is a script capable of dragging the appropriate boxes from the library and placing the text automatically? We haven't tried using the library yet.
    Sorry, I am out of ideas; if anybody has come across complex job automation, please share some ideas on how to tackle these types of projects.
    Thanks,
    Kavya

    Do you want "general" advice or something more specific for your project?
    Generally, when I have a large project, I like to break it down into smaller components. I script one simple action to make sure it all works. Then I try another part of the job and make sure all those commands work. Once I have about 60% of the core functions worked out, I start to combine them into a workflow application (I use Xcode to develop AppleScript applications). At each step I make sure to code for flexibility for future changes.
    As for dealing with library objects like you mention, I have not tried working with libraries. You should make sure that you can script the library objects, if that is how you are going to fill items or build a document.
    Chris

  • Automated Batch Jobs for Recurring / Reversal Postings

    Is there any way to automatically run the batch job on a monthly cycle to generate the postings for recurring and/or reversal entries for accruals? Right now the manual way to generate postings for recurring entries is F.14, and for reversals it is FB08 / F.80. Maybe there are some settings within these t-codes themselves that let you automate these batch jobs? Any assistance would be greatly appreciated, thanks.

    Workflow is the only option.

  • Automating Deleting Job Runs?

    Is there a way of deleting jobs that have run, after a certain period of time or status, other than manually? I.e., is there an API I could use to do this? Or a script?
    Thanks,
    BradW

    Found this in the OEM Online Help..
    The Enterprise Manager default purge policy for jobs deletes all finished jobs that are older than 30 days. This value cannot be changed using Enterprise Manager, but it can be changed using SQL*Plus.
    To change the default time period (for example, to 60 days), use SQL*Plus to log into the Management Repository database as the Enterprise Manager repository owner (SYSMAN). The default purge policy is called SYSPURGE_POLICY. To change the time period, simply drop and re-create the policy with a different time frame:
    SQL> execute MGMT_JOBS.drop_purge_policy('SYSPURGE_POLICY');
    SQL> execute MGMT_JOBS.register_purge_policy('SYSPURGE_POLICY', 60, null);
    SQL> COMMIT;
    The actual purging of jobs is implemented by a DBMS job that runs once a day. When the job runs, it looks for and deletes finished jobs that are the specified number of days older than the current time. (The current time refers to the current time with respect to the Management Repository database.) The actual time that the job runs may vary with each Enterprise Manager installation. To determine this time for your Enterprise Manager installation, use SQL*Plus to connect to the Management Repository database using the SYSMAN account. Then perform the following queries:
    SQL> alter session set nls_date_format='mm/dd/yy hh:mi:ss pm';
    (Optional: this will format the date appropriately)
    SQL> select what, next_date from user_jobs;
    In the value for the ‘WHAT’ column, look for the MGMT_JOB_ENGINE.apply_purge_policies job, as shown below. The job will run at the same time each day. In this example, the purge policy job will run every day at 11:45:21 am, repository time:
    WHAT                                     NEXT_DATE
    ---------------------------------------- --------------------
    MGMT_JOB_ENGINE.apply_purge_policies();  12/09/03 11:45:21 am

  • Automated Monitoring Scheduled Job not completing - PC

    Hello Experts,
    When we schedule an automated monitoring job, it does not execute completely and the status is shown as "In Progress". Also, when we open the "Job Step Log" tab of the scheduled job in Automated Monitoring, it is blank.
    What could be the reason for it?
    Regards,
    Ramakrishna Chaitanya

    Hi
    Try with the SOX export role. Are you running in asynchronous mode?

  • Write to AWS job log

    How can I write to the automation server job log file from my EDK AWS? This AWS does more than just return groups and users, and I need to write the results of the additional operations to the log file.
    Please let me know how to implement this.

    Currently, the protocol between the portal and the remote web services only permits writing custom log messages from a crawler, but not from an authentication service or a profile service.
    We are considering this feature for future releases; in the meantime, you may want to follow the pattern we use for our own products, which use Log4N or Log4J to write a remote log file. In the case of a critical error that the administrator needs to know about, throw an exception with a message; in 5.0.1 this message did not get put in the job log, but in 5.0.2 it does.

  • Running a Dynpro-based Report as a Job/in Background mode

    Hello,
    I've got a report which doesn't have a selection screen as its start screen, but a complex dynpro, and it is driven by different start buttons rather than only the basic F8/Run button. The users would still like to be able to run the report in background mode and as an automated weekly job. Moreover, they want to use their own configuration/variant for date fields etc., just as they are used to from a selection screen. Is this possible "out of the box" with a dynpro-based report, or how can I achieve this through my own programming logic?
    thanks for your help,
    dsp

    Hi,
    I guess yes, but since you seem to have several possible processes at startup (those buttons), you will have to add a bit of code at the start of your application to choose the correct process. You would have a statement like:
    IF sy-batch IS NOT INITIAL.
         "Perform batch process
    ELSE.
         "Perform normal run
    ENDIF.
    For the variant, I'm not sure I really get the idea... Do the users already use variants with the current version, or is there no selection screen at all? If there is none, you should add one and set up parameters to pre-fill your dynpro fields...
    Kr,
    Manu

  • Scheduled Job to gather stats for multiple tables - Oracle 11.2.0.1.0

    Hi,
    My Oracle DB Version is:
    BANNER Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    In our application, we have users uploading files, resulting in inserts of records into a table. A file can contain anywhere from 10,000 to 1 million records.
    I have written a procedure to bulk insert these records into this table using the LIMIT clause. After the insert, I noticed my queries run slowly against these tables if huge files are uploaded simultaneously. After gathering stats, the cost drops and the queries execute faster.
    We have 2 such tables which grow based on user file uploads. I would like to schedule a job to gather stats for these two tables during a non-peak hour, apart from the nightly automated Oracle job.
    Is there a better way to do this?
    I plan to execute the below procedure as a scheduled job using DBMS_SCHEDULER.
    --Procedure
    create or replace PROCEDURE p_manual_gather_table_stats AS
        TYPE ttab IS TABLE OF VARCHAR2(30) INDEX BY PLS_INTEGER;
        ltab ttab;
    BEGIN
        ltab(1) := 'TAB1';
        ltab(2) := 'TAB2';
        FOR i IN ltab.first .. ltab.last
        LOOP
            dbms_stats.gather_table_stats(
                ownname          => USER,
                tabname          => ltab(i),
                estimate_percent => dbms_stats.auto_sample_size,
                method_opt       => 'for all indexed columns size auto',
                degree           => dbms_stats.auto_degree,
                cascade          => TRUE);
        END LOOP;
    END p_manual_gather_table_stats;
    --Scheduled Job
    BEGIN
        -- Job defined entirely by the CREATE JOB procedure.
        DBMS_SCHEDULER.create_job ( job_name => 'MANUAL_GATHER_TABLE_STATS',
        job_type => 'PLSQL_BLOCK',
        job_action => 'BEGIN p_manual_gather_table_stats; END;',
        start_date => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=12;BYMINUTE=45;BYSECOND=0',
        end_date => NULL,
        enabled => TRUE,
        comments => 'Job to manually gather stats for tables: TAB1,TAB2. Runs at 12:45 Daily.');
    END;
    Thanks,
    Somiya

    The question was, is there a better way, and you partly answered it.
    Somiya, you have to be sure the queries have appropriate statistics when the queries are being run. In addition, if the queries are being run while data is being loaded, that is going to slow things down regardless, for several possible reasons, such as resource contention, inappropriate statistics, and having to maintain a read consistent view for each query.
    The default collection job decides for each table based on changes it perceives in the data. You probably don't want the default collection job to deal with those tables. You probably do want to do what Dan suggested with the statistics. But it's hard to tell from your description. Is the data volume and distribution volatile? You surely want representative statistics available when each query is started. You may want to use all the plan stability features available to tell the optimizer to do the right thing (see for example http://jonathanlewis.wordpress.com/2011/01/12/fake-baselines/ ). You may want to just give up and use dynamic sampling, I don't know, entire books, blogs and papers have been written on the subject. It's sufficiently advanced technology to appear as magic.
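    If you do decide to manage those two tables yourself, one common pattern (just a sketch against your TAB1/TAB2 names, not tested against your system) is to lock their statistics so the nightly automatic job skips them, and then have your own scheduled job override the lock with FORCE => TRUE:

    ```sql
    -- One-off: lock stats so the automatic maintenance job leaves TAB1/TAB2 alone.
    BEGIN
        DBMS_STATS.lock_table_stats(USER, 'TAB1');
        DBMS_STATS.lock_table_stats(USER, 'TAB2');
    END;
    /
    -- In the manual job, FORCE => TRUE gathers despite the lock.
    BEGIN
        DBMS_STATS.gather_table_stats(
            ownname          => USER,
            tabname          => 'TAB1',
            estimate_percent => DBMS_STATS.auto_sample_size,
            cascade          => TRUE,
            force            => TRUE);
    END;
    /
    ```

    Whether locking is appropriate depends on how volatile the data is, per the caveats above.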

  • Payment Proposals (F110) not created in batch job

    Hello,
    We had a one-time issue today. I am just trying to find out if anyone has come across this issue.
    We have daily automated batch jobs which kick off payment proposals. Somehow the system did not create any proposals and showed zero incoming payments for customers. Business users had their doubts, as this should not be possible, and created the proposals manually in 15 minutes, and a big payment proposal was created. I am trying to find the cause of this issue. Please let me know if anyone has come across it.
    I did all the analysis from a business point of view, and all the invoices were created a few days back.
    Thanks
    ID

    Hi,
    Thanks for your response. The batch jobs are correctly maintained and, as I mentioned, this happened only once; the same batch runs every day and the issue has not occurred again.
    Thanks
    Ivan

  • Duplicate IR through parallel processing for automated ERS

    Hi,
    We got a duplicate IR issue in production when running the parallel processing job for automated ERS. The issue does not happen every time, only once in a while; it happened twice in the month of June. What could be the reasons for this issue? On those days the job took more time compared to normal runs. We are unable to replicate the same scenario: when I test, the job creates the IRs successfully. Please provide possible reasons for this.

    Wow - long post to say "can I use hardware boxes as inserts?" and the answer is yes, and you have been able to for a long time.
    I don't know why you're doing some odd "duplicated track" thing... weird...
    So, for inserts of regular channels, just stick Logic's I/O plug on the channel. Tell it which audio output you want it to send to, and which audio input to receive from. Patch up the appropriate ins and outs on your interface to your hardware box/patchbay/mixer/whatever and bob's your uncle.
    You can also do this on aux channels, so if you want to send a bunch of tracks to a hardware reverb, you'd put the I/O plug on the aux channel you're using in the same way as described above. Now simply use the sends on each channel you want to send to that aux (and therefore hardware reverb).
    Note you'll need to have software monitoring turned on.
    Another way is to just set the output of a channel or aux to the extra audio outputs on your interface, and bring the outputs of your processing hardware back into spare inputs and feed them into the Logic mix using input objects.
    Lots of ways to do it in Logic.
    And no duplicate recordings needed...
    I still don't understand why the Apple-developers didn't think of including such a plug-in, because it could allow amazing routing possibilities, like in this case, you could send the audio track to the main output(1-2 or whatever) BUT also to alternate hardware outputs, so you can use a hardware reverb unit, + a hardware delay unit etc...to which the audio track is being sent , and then you could blend the results back in Logic more easily.
    You can just do this already with mixer routing alone, no plugins necessary.

  • Automator crashes when I try to run a specific workflow..

    Hi there,
    I've been trying to add a workflow to Automator in Lion to Get Contents of Clipboard and then run the action Text to Audio File.  It crashes with the console output below.  I've tried sourcing the text from elsewhere such as the frontmost TextEdit window and this works fine.  Any ideas?
    5/09/11 1:14:20.709 PM Automator: -[NSConcreteAttributedString getCharacters:range:]: unrecognized selector sent to instance 0x402618be0
    5/09/11 1:14:20.710 PM Automator: An uncaught exception was raised
    5/09/11 1:14:20.710 PM Automator: -[NSConcreteAttributedString getCharacters:range:]: unrecognized selector sent to instance 0x402618be0
    5/09/11 1:14:20.710 PM Automator: (
              0   CoreFoundation                      0x00007fff8a59a986 __exceptionPreprocess + 198
              1   libobjc.A.dylib                     0x00007fff8a0e6d5e objc_exception_throw + 43
              2   CoreFoundation                      0x00007fff8a6265ae -[NSObject doesNotRecognizeSelector:] + 190
              3   CoreFoundation                      0x00007fff8a587803 ___forwarding___ + 371
              4   CoreFoundation                      0x00007fff8a587618 _CF_forwarding_prep_0 + 232
              5   CoreFoundation                      0x00007fff8a50e3ab CFStringGetCharacters + 139
              6   SpeechDictionary                    0x000000010865a5e6 _ZN20SLCFStringTextSource6RefillERPtS1_PKt + 270
              7   SpeechDictionary                    0x0000000108659cef _ZN15SLLexerInstance6RefillEi + 33
              8   SpeechDictionary                    0x000000010866e191 _ZN11SLLexerImpl9NextTokenEv + 633
              9   SpeechDictionary                    0x000000010865a0e7 _ZN13SLLexerBufferixEm + 71
              10  SpeechDictionary                    0x000000010866bcdd _ZN15SLPostLexerImpl9NextTokenEv + 43
              11  SpeechDictionary                    0x00000001086acfba _ZN17SLStemTrackerImpl9NextTokenEv + 42
              12  SpeechDictionary                    0x000000010865a0e7 _ZN13SLLexerBufferixEm + 71
              13  SpeechDictionary                    0x0000000108655e24 _ZN12SLTuplesImpl9NextTokenEv + 62
              14  MacinTalk                           0x0000000108559a5e _ZN11MTFEBuilder9PeekTokenEv + 34
              15  MacinTalk                           0x000000010855985a _ZN11MTFEBuilder13ParseSentenceEv + 44
              16  MacinTalk                           0x000000010855965d _ZN14MT3BEngineTask15ParseNextPhraseEPv + 429
              17  MacinTalk                           0x00000001085593af _ZN10MTBEWorker12ExecuteTasksEv + 321
              18  libdispatch.dylib                   0x00007fff894612f1 _dispatch_source_invoke + 614
              19  libdispatch.dylib                   0x00007fff8945dfc7 _dispatch_queue_invoke + 71
              20  libdispatch.dylib                   0x00007fff8945e124 _dispatch_queue_drain + 210
              21  libdispatch.dylib                   0x00007fff8945dfb6 _dispatch_queue_invoke + 54
              22  libdispatch.dylib                   0x00007fff8945d7b0 _dispatch_worker_thread2 + 198
              23  libsystem_c.dylib                   0x00007fff8ed153da _pthread_wqthread + 316
              24  libsystem_c.dylib                   0x00007fff8ed16b85 start_wqthread + 13
    5/09/11 1:14:20.710 PM Automator: *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSConcreteAttributedString getCharacters:range:]: unrecognized selector sent to instance 0x402618be0'
    *** First throw call stack:
              0   CoreFoundation                      0x00007fff8a59a986 __exceptionPreprocess + 198
              1   libobjc.A.dylib                     0x00007fff8a0e6d5e objc_exception_throw + 43
              2   CoreFoundation                      0x00007fff8a6265ae -[NSObject doesNotRecognizeSelector:] + 190
              3   CoreFoundation                      0x00007fff8a587803 ___forwarding___ + 371
              4   CoreFoundation                      0x00007fff8a587618 _CF_forwarding_prep_0 + 232
              5   CoreFoundation                      0x00007fff8a50e3ab CFStringGetCharacters + 139
              6   SpeechDictionary                    0x000000010865a5e6 _ZN20SLCFStringTextSource6RefillERPtS1_PKt + 270
              7   SpeechDictionary                    0x0000000108659cef _ZN15SLLexerInstance6RefillEi + 33
              8   SpeechDictionary                    0x000000010866e191 _ZN11SLLexerImpl9NextTokenEv + 633
              9   SpeechDictionary                    0x000000010865a0e7 _ZN13SLLexerBufferixEm + 71
              10  SpeechDictionary                    0x000000010866bcdd _ZN15SLPostLexerImpl9NextTokenEv + 43
              11  SpeechDictionary                    0x00000001086acfba _ZN17SLStemTrackerImpl9NextTokenEv + 42
              12  SpeechDictionary                    0x000000010865a0e7 _ZN13SLLexerBufferixEm + 71
              13  SpeechDictionary                    0x0000000108655e24 _ZN12SLTuplesImpl9NextTokenEv + 62
              14  MacinTalk                           0x0000000108559a5e _ZN11MTFEBuilder9PeekTokenEv + 34
              15  MacinTalk                           0x000000010855985a _ZN11MTFEBuilder13ParseSentenceEv + 44
              16  MacinTalk                           0x000000010855965d _ZN14MT3BEngineTask15ParseNextPhraseEPv + 429
              17  MacinTalk                           0x00000001085593af _ZN10MTBEWorker12ExecuteTasksEv + 321
              18  libdispatch.dylib                   0x00007fff894612f1 _dispatch_source_invoke + 614
              19  libdispatch.dylib                   0x00007fff8945dfc7 _dispatch_queue_invoke + 71
              20  libdispatch.dylib                   0x00007fff8945e124 _dispatch_queue_drain + 210
              21  libdispatch.dylib                   0x00007fff8945dfb6 _dispatch_queue_invoke + 54
              22  libdispatch.dylib                   0x00007fff8945d7b0 _dispatch_worker_thread2 + 198
              23  libsystem_c.dylib                   0x00007fff8ed153da _pthread_wqthread + 316
              24  libsystem_c.dylib                   0x00007fff8ed16b85 start_wqthread + 13
    5/09/11 1:14:20.710 PM [0x0-0x75075].com.apple.Automator: terminate called throwing an exception
    5/09/11 1:14:21.452 PM com.apple.launchd.peruser.501: ([0x0-0x75075].com.apple.Automator[2424]) Job appears to have crashed: Abort trap: 6
    5/09/11 1:14:21.594 PM ReportCrash: Saved crash report for Automator[2424] version 2.2 (329) to /Users/Steve/Library/Logs/DiagnosticReports/Automator_2011-09-05-131421_Stevens-MacBook-Pro.crash

    I think the suggestion would have been to update Microsoft Office, which is not available from the App Store. You need to get it from Microsoft.
    What you can do is run the Microsoft AutoUpdate utility. Open Finder and click on Go > Go to Folder. Then enter the following (copy and paste):
    /Library/Application Support/Microsoft/MAU2.0
    If you are running MS Office 2011 then you will see the Microsoft AutoUpdate utility. Double-click to open and get the latest update.

  • File Dependency not working properly

    Hi,
    In our project we have a job group which moves a set of files from one directory to another. The first job in that group will run only when a file with the extension "ind" is present in the source directory, i.e. the dependency is set on a file with the extension *.ind (please see the attached screenshot for reference).
    But when we run the job, the dependency is not satisfied even though the file is present in the source directory.
    What could be the reason for this?
    Note: If I override the job, then the files are moved properly. But since this is automated, the job run must be based on the dependency we have set.

    This sounds like the runtime user of the job is a different account than the one used by the agent.
    File dependencies and file events are evaluated by the agent process. This means the account running the agent service must have access to the file.
    When the job runs, it uses the runtime user. If the runtime user is a different account than the agent account, you can encounter the problem you describe.
    If this is a Windows agent running as the Local System account, the agent will only have access to files local to the server. So if the file is on another server, the agent will not have access to it.
    If this isn't your issue, could you provide details about the agent (Windows/Unix), the file location (local to the agent/UNC path), and whether the agent running the job is the same as the agent being used to evaluate the file dependency?
    Thanks.

  • Logical corruption found in the sysaux tablespace

    Dear All:
    We have lately been seeing a logical corruption error when running the dbverify command, which reports block corruption. It is always in the SYSAUX tablespace. The database is 11g and the platform is Linux.
    We get an error like "error backing up file 2 block xxxx: logical corruption", and this comes into the alert.log from the automated maintenance jobs, such as the SQL Tuning Advisor running during the maintenance window.
    Now, as far as I know, we can't drop or rename the SYSAUX tablespace. There is a startup migrate option to drop SYSAUX, but it does not work due to the presence of domain indexes. You can run RMAN block media recovery, but it ends up not fixing it, since RMAN backups are physical rather than maintaining logical integrity.
    Any help, advise, suggestion will be highly appreciated.

    If you leave this corruption there, you are likely to face a big issue that will compromise database availability sooner or later. SYSAUX is a critical tablespace, so you must proceed with caution.
    Make sure you have a valid backup, and don't do anything unless you are sure about what you are doing and have a fallback procedure.
    If you still have a valid backup, you can use RMAN to perform block-level recovery; this will help you fix the block. Otherwise, try to restore and recover SYSAUX. If you cannot fix the block by refreshing the SYSAUX tablespace, then I suggest you create a new database, use the Transportable Tablespace technique to migrate all tablespaces from your current database to the new one, and get rid of this database.
    ~ Madrid
    http://hrivera99.blogspot.com
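    For reference, the RMAN block media recovery mentioned above looks roughly like this (the datafile/block numbers here are placeholders, since the actual block number from the alert.log was not posted):

    ```sql
    RMAN> BLOCKRECOVER DATAFILE 2 BLOCK 1234;
    -- 11g also accepts the newer equivalent spelling:
    RMAN> RECOVER DATAFILE 2 BLOCK 1234;
    ```

    This only repairs the physical block from backup; as noted above, it may not help if the problem is logical rather than physical.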

  • Oracle XE (windows) on drive other than C:\

    Hi,
    I have installed XE on a Windows 2008 server. I was restricted to installing it on the N: drive. The install went fine, and everything seems to be working. However, I run XE as part of an automated build job. The job causes an XE problem (something like "DB already started", or similar) which the build job ignores, but XE wants to do some logging, I think.
    So, I found the messages below in N:\oraclexe\app\oracle\admin\XE\bdump\alert_xe.log
    Errors in file n:\oraclexe\app\oracle\admin\xe\bdump\xe_dbw0_1624.trc:
    ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
    ORA-01110: data file 1: 'C:\ORACLEXE\ORADATA\XE\SYSTEM.DBF'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 3) The system cannot find the path specified.
    Thu Mar 24 11:08:45 2011
    Errors in file n:\oraclexe\app\oracle\admin\xe\bdump\xe_dbw0_1624.trc:
    ORA-01157: cannot identify/lock data file 2 - see DBWR trace file
    ORA-01110: data file 2: 'C:\ORACLEXE\ORADATA\XE\UNDO.DBF'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 3) The system cannot find the path specified.
    So my question is: do I have to set an environment variable so that nothing looks at C:\, or are these types of values (C:\ORACLEXE\ORADATA\XE) hard-coded?
    Thanks for any info!
    Bernie

    Unless maybe an earlier install attempt left stuff in the registry... If the database never got a successful startup on the N: files, the installer didn't complete all its tasks; I think there are quite a few installer tasks after the instance is created.
    Take a look at the manual deinstall steps. It's probably worth going through all the uninstall steps, especially the registry cleanup, and giving it another try:
    http://download.oracle.com/docs/cd/B25329_01/doc/install.102/b25143/toc.htm#BABFFJIB
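    As a hypothetical first check before reinstalling (assuming the instance can at least mount): the datafile paths are recorded in the control file, not in an environment variable, so you can see exactly what the instance is looking for and, if the files really do exist on N:, repoint them:

    ```sql
    SQL> STARTUP MOUNT
    SQL> SELECT name FROM v$datafile;
    SQL> ALTER DATABASE RENAME FILE
      2  'C:\ORACLEXE\ORADATA\XE\SYSTEM.DBF' TO 'N:\oraclexe\oradata\XE\system.dbf';
    SQL> ALTER DATABASE OPEN;
    ```

    The N: target path here is a guess at the layout; substitute the actual location of your datafiles. If the files were never created on N: at all, the reinstall advice above is the way to go.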

  • EBS BAI2 file - Transaction 475, check clearing - extra character appending

    Hello experts,
    We are using Algorithm 13 for processing checks, which are processed using transaction 475 on the incoming bank file.
    In the BAI2 file format, the check numer is contained in record 16 in the following format-
    16,475,58740,0,9180914733,67689/
    88,CHECK NO=0000000067689
    The problem we are having is this: if the EBS program is run as an automated batch job, then SAP appends the '/' (that appears at the end of the line in record 16) to the check number that it uses to match the documents for clearing. Therefore, in the above example, our SAP system is trying to find check number '67689/' in the check lots to match against the payment document. However, our check numbers are just 67689, so SAP is not able to find a match and posts the document in the 'On Account' state.
    But if the same file is uploaded manually in Workstation upload mode, then there is no extra '/' in the check number and SAP correctly clears the cash clearing account. I know there is nothing wrong with the file sent to us by the bank; plus, running it manually does not cause this issue.
    Has anyone else faced this problem before? If you can suggest what could possibly be going wrong, it would be a great help to me.
    Thanks!

    I understand wanting to find the root issue, not just apply a band-aid. And I understand the frustration of not being able to recreate an issue in a test system. But at least a search string is only configuration, not ABAP code in a user exit... And if it works, it will relieve your users from clearing the checks manually.
    If you do go with a search string, you should be able to isolate the last 5 digits by only mapping those.  For example, you'd set the search string to: 
    CHECK NO=00000000#####
    Then in the mapping, blank out all values except for the 5 # symbols - so the mapping would be 17 blank spaces followed by 5 #'s.  I've been able to successfully extract reference numbers for ACH deposit clearings this way - I don't see why it wouldn't also work in your situation with check numbers.
    Regards,
    Shannon
