Mds not running. Lazy indexing

I have been having problems with Finder and Spotlight recently. I've done a clean install and an archive install in the last three weeks but this hasn't helped.
Spotlight will only carry out a brief index, nowhere near indexing my entire drive. The mds process is not running, and I can't find where the little expletive is hiding in order to start it up manually.
Finder crashes when I use More Info in the Inspector. I've verified disk permissions and repaired both the permissions and the startup disk itself.
I've tried to force a rebuild, but it finishes far too quickly.
sudo mdutil -E /
is the command for those of you who haven't tried that option yet; it does require root access, of course.
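In case it helps anyone compare notes, here is roughly how to check on mds from Terminal. This is only a sketch for 10.4.x; the launchd plist path and job label below are the usual locations, so treat them as assumptions if your setup differs.
# is mds (or anything metadata-related) in the process list?
ps aux | grep -v grep | grep mds
# what does the startup volume say about its index?
sudo mdutil -s /
# mds is managed by launchd, so ask launchd to (re)load and start it
sudo launchctl load /System/Library/LaunchDaemons/com.apple.metadata.mds.plist
sudo launchctl start com.apple.metadata.mds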
Anyone else having similar problems? I've got no idea how to get OS X playing nicely again. In all honesty I feel like I've hit a brick wall.
All help is appreciated. Thanks fellow Mac-enthusiasts.
MBP   Mac OS X (10.4.10)   2.33GHz Core2Duo, 3GB, 256MB RadeonX1600, 120GB & 500GB LaCie d2.

I have fixed the issue myself using a sledgehammer approach as the feather-light touch did not help.
Here's an extract direct from my blog with all the info you need.
*Indicative Symptoms*
If Spotlight is not indexing your drive properly you may notice the search in Mail no longer finds any results. You may also notice Finder slows to a snail’s pace in column view, and looking for more info in the Inspector does nothing short of inducing a fatal hang within Finder.
*Feather-light Solutions*
Apple and other websites will tell you to put your entire drive in the Privacy section of Spotlight’s preference pane and then remove it. This did not work when I tried it, as the index was totally corrupted. You may want to try it, however, as it has worked for some people.
I opted for using the command line to reindex as I’m a bit of a command line junkie. If you get an indexing status unknown error during this process you’ll need to proceed to the sledgehammer steps.
If you do not wish to use the command line for this step you will need to refer to http://www.info.apple.com/kbnum/n302223 for more information on fixing Spotlight issues.
And for those of us who aren’t *nix nerds, but still want to use the command line, you will need to enable root user access to employ this fix. Apple have documentation on how to do this at http://docs.info.apple.com/article.html?artnum=106290.
sudo mdutil -i off /
sudo mdutil -E /
sudo mdutil -i on /
These three commands, executed with root user permissions, first turn off indexing on the startup volume (the startup volume is mounted at /, hence the trailing forward slash), then clear the index completely using the -E flag, and finally turn indexing back on.
*Sledgehammer Steps*
These steps should only be used as a last resort as they go above and beyond the typical guidelines for repairing Spotlight; they did, however, work for me.
First you’re going to have to remove the main Spotlight directories on your HDD using root permissions.
Then repair disk permissions and reboot.
The commands necessary are below. This is a _last resort_ and should not be attempted before a simple uninterrupted reindex.
sudo rm -r /.Spotlight-V100
sudo rm -r /Library/Spotlight
diskutil repairPermissions /
Alternatively, you can use Apple’s Disk Utility to repair permissions. The Disk Utility application can be found within your Utilities folder (/Applications/Utilities/Disk Utility.app).
After carrying out these steps reboot your system. Once you’ve logged in DO NOT TOUCH SPOTLIGHT! Just leave your computer alone until the pulsing white circle within the Spotlight icon has disappeared or you will have achieved nothing.
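If you would rather confirm from Terminal that the rebuild really is under way, rather than watching the menu bar, the rough checks below should do it. The exact output wording varies between OS X releases, so treat this as a sketch.
# mds should be running again after the reboot
ps aux | grep -v grep | grep mds
# the index directory should have been recreated at the root of the volume
ls -ld /.Spotlight-V100
# and the startup volume should report that indexing is enabled
sudo mdutil -s /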
Happy sudoing!

Similar Messages

  • The index management service is not running

    I'm in the process of creating a web repository with the following version history:
    Version History
    6.0.2.4.3_ContentManagement_Collaboration
    6.0.2.3.5.Enterprise_Portal_Service_Pack_2
    I've finished creating the crawler profile, cache, website, system, and repository.
    When I go to index admin I get the error 'The index management service is not running'.
    I tried restarting the servlet engine and the TREX server; the error still persists.
    Is there any tool/note for correcting this?
    Any help appreciated.
    prasad badal

    Hello Karsten,
    thank you for your reply.
    We have a lot of entries in our knowledgement.log files like the following:
    #1.5#C000AC17158E0004000004AB005FCB470003E52BC02E9648#1096401000765#com.sapportals.wcm.service.indexmanagement.TaskQueueReaderTask#irj#com.sapportals.wcm.service.indexmanagement.TaskQueueReaderTask#System#0#####com.sapportals.wcm.scheduler##0#0#Error##Plain###null - java.lang.NullPointerException
         at com.sapportals.wcm.service.indexmanagement.TaskQueueReaderTask.run(TaskQueueReaderTask.java:33)
         at com.sapportals.wcm.service.scheduler.wcm.SchedulerEntry.run(SchedulerEntry.java:332)
         at com.sapportals.wcm.service.scheduler.wcm.Scheduler.run(Scheduler.java:367)
         at com.sapportals.wcm.util.factories.ThreadUtils$InternalFixedTimer.run(ThreadUtils.java:91)
         at java.lang.Thread.run(Thread.java:479)
    Do you know what to do?
    Thanks a lot

  • Attribute Change Run aborted with an error :Not all data indexed for index

    Hi,
    I have an issue where the master data Attribute Change Run aborted with an error pointing to the BWA:
    Attribute Change Run aborted with an error :Not all data indexed for index
    We tried to load only 0 records into it, and the error still occurred.

    Detail message is following:
    not all data was indexed for index 'GBP_BIC:XYCH_098('0' for '114')
    creation of The BIA INDEX for infocube 'YBC_SD015' terminated while filling

  • MDS and MDWORKER processes not running

    Spotlight stopped working about two weeks ago. I noticed that the mds and mdworker processes are not running. Something corrupted maybe???
    Any ideas?


  • Indexing not running

    Noticed my Outlook Express would no longer do a search and was endlessly trying to index... Looked at indexing options through the Control Panel and am told "Indexing is not running"... but can't figure out how to turn it on.
    Ideas?

    One solution is to delete all files with the .BLF and .REGTRANS-MS extensions in the directory 
    C:\windows\system32\config\TxR 
    See http://support.microsoft.com/kb/2484025
    It did not work for me (I was unable to delete all the files, some of which were recognized as essential system files), but it did for others.
    The solution for me was: 
    Essential first step: disable the Windows Search service (not stop, but disable).
    Reboot
    Delete all files in C:\ProgramData\Microsoft\Search\Data\Applications\Windows
    Delete all files in C:\ProgramData\Microsoft\Search\Data\Temp
    (these are hidden files, so be sure that hidden files are made visible)
    Reboot
    Start the Windows Search service (delayed start).
    Reboot
    (I'm not sure if the last reboots are really necessary, but I did them just in case.)
    Source: http://www.online-tech-tips.com/computer-tips/how-to-fix-microsoft-windows-search-indexer-stopped-wo... 

  • Mdworker not running

    OS X 10.6.8 Server
    Problem: Spotlight is not indexing. We deleted the index for reindexing and now it's impossible to search anything.
    Reason: the mdworker process, which is normally launched by launchd during booting and is used by _spotlight, is not running.
    The mds process, which is launched by launchd and used by root, is running.
    I restarted the server; still silence from mdworker.
    With mdworker not running Spotlight simply does nothing.
    Any help is appreciated.

    TL;DR: Check the console logs for errors.
    Spotlight isn't specific to OS X Server. 
    Reinstalling OS X and OS X Server isn't a huge project and is a common solution for some variations of "weird" but wouldn't be my first choice here. 
    I'd start with a look at the console log files first and see what (if anything) is logged there around mdworker et al.
    I've seen cases where odd-ball or corrupted files can trip up Spotlight, though finding those files can sometimes be a little interesting.   If it's a user's file that's tripping Spotlight, reinstalling OS X Server doesn't help.
    Later versions of Spotlight tend to work a little better, though I'm guessing there's a reason you're still at 10.6.8.
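    If it helps as a starting point, the commands below are roughly where I would begin. This is only a sketch; the log location and the launchd label are the usual ones on 10.6.8 and may differ on your build.
    # is the metadata server job loaded at all?
    sudo launchctl list | grep -i metadata
    # anything being logged about mds or mdworker?
    grep -iE "mdworker|mds" /var/log/system.log | tail -n 50
    # current indexing state of the boot volume
    sudo mdutil -s /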

  • The background thread running lazy writer encountered an I/O error

    Hi, I have a test server which has thrown the following error:
    File system error: A system error occurred while attempting to read or write to a file store. The system might be under memory pressure or low on disk space. Physical file: \\?\F:\MSAS11.DEPLOYAS\OLAP\Data\Prod_KCube.0.db\DIM Flags And Types.0.dim\3.Flag
    Types Key.khstore. Logical file: . GetLastError code: 8. File system error: The background thread running lazy writer encountered an I/O error. Physical file: \\?\F:\MSAS11.DEPLOYAS\OLAP\Data\Prod_KCube.0.db\DIM Flags And Types.0.dim\3.Flag Types Key.khstore.
    Logical file: . Errors in the OLAP storage engine: An error occurred while processing the 'Facts' partition of the 'Main Facts' measure group for the 'Prod_Cube' cube from the Prod_KCube database.
    The cube sits on a not very well maintained server which is used by various users (it is a test server) with the following specs:
    Intel(R) Xeon(R) CPU X5690 @ 3.47GHz
    24GB Ram
    64 Bit operating system.
    The cube data and logs are on separate drives and have plenty of space, but the C drive (where SQL Server is installed) only has 3.5 GB of space left.
    It's a fairly big cube, and I've managed to get it running by processing dimensions and facts bit by bit, but it errors when processed all together.
    What could be causing the errors above?

    Hi aivoryuk,
    According to your description, you get the lazy writer error when processing partitions. Right?
    In this scenario, the issue may be caused by low memory for SSAS and a lack of disk space. Please consider configuring the Server Properties (Memory page) and increasing the memory settings for SSAS. If the .cub file is located on the C drive, please free up more disk space.
    Please refer to a similar thread below:
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/21bf84c5-f89a-464a-a5f1-2649fae5eb1e/while-processing-large-cubes-various-file-system-errors-the-background-thread-running-lazy-writer?forum=sqlanalysisservices
    Best Regards,
    Simon Hou
    TechNet Community Support

  • Applet does not run in Browser

    I am using Windows XP and IE 6.0. Applets do not run in my browser. Hovering the cursor where the applet should be in the browser, I get a "Class not found" message. Setting the CLASSPATH variable did not help. Any suggestions?

    You might find an answer on this new page:
    [url http://java.sun.com/j2se/1.4.2/docs/guide/deployment/deployment-guide/upgrade-guide/index.html]Java Upgrade Guide: Migrating From the Microsoft JVM to the Sun JRE

  • What are the symptoms of a MacBook Pro hard drive not running fast enough

    What are the symptoms of the hard drive not running fast enough?

    Physically the drive will be as fast as any other of its class, and if not then it's experiencing a mechanical issue that should prompt a warning or two in the system, especially if you use a drive management tool like Disk Utility that checks the drive's built-in "S.M.A.R.T." diagnostics.
    As for "logical speed," if there are bad blocks on the drive where sectors cannot be read or written to because of a breakdown of the storage medium, then this can result in a program or the entire system crashing or hanging (generally the latter is the case), where it usually pauses for a number of seconds at a time. This is usually accompanied by messages that state "I/O Error" or other similar warning in the OS X Console (in the Applications > Utilities folder).
    Beyond this, drive formatting errors such as might happen after a crash or power outage can result in similar hang-like behavior, though this is usually not as distinct of a hanging behavior as is seen with bad blocks.
    Finally, if the drive is simply full of files or highly fragmented (not as much of a problem in OS X, but it can happen), then this can reduce access time for reading and writing files, and prevent the system from optimizing the use of RAM, which can result in overall slowdowns. Heavy drive use can also cause similar bottlenecks, but this is usually only an intermittent issue during a time when you are performing a heavy file transfer or copying process, or other task that occupies access to the drive's index and formatting (e.g., if you check it for errors with Disk Utility, then you may see the system pause or slow down during the check).
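    If you would rather check those two things from Terminal instead of the GUI tools mentioned above, a quick sketch follows; it assumes the internal drive is disk0, so adjust the identifier to match your machine.
    # the drive's built-in S.M.A.R.T. verdict
    diskutil info disk0 | grep -i smart
    # any I/O errors the system has already logged
    grep -i "I/O error" /var/log/system.log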

  • Pyzor Not Running Question

    I am thinking this could be a permission issue.
    I have installed the pyzor files as listed at http://wiki.apache.org/spamassassin/SpamAssassin_on_Mac_OS_X_Server
    There were no issues with the install, but when I run spamassassin -D --lint I see the following:
    [7661] dbg: pyzor: pyzor is not available: no pyzor executable found
    [7661] dbg: pyzor: no pyzor found, disabling Pyzor
    Again during the test it found the gdbm. The only thing I can think of is a permission issue. Not sure if anyone else has run across this.
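    In case it matters, here is roughly how I have been checking whether the executable is even visible to SpamAssassin. The /usr/local/bin location and the pyzor_path option below are guesses on my part, so adjust them to wherever your install actually put pyzor.
    # can the shell (and the clamav user SpamAssassin runs as) find pyzor?
    which pyzor
    sudo -u clamav which pyzor
    ls -l /usr/local/bin/pyzor
    # taint mode trims PATH to the directories listed in the -D output, so if
    # pyzor lives somewhere else, point SpamAssassin at it in local.cf, e.g.:
    #   pyzor_path /usr/local/bin/pyzor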
    Oh and here is the entire log from the spamassassin -D --lint
    xserve1:~ tope$ sudo su clamav -c "spamassassin -D --lint"
    [7661] dbg: logger: adding facilities: all
    [7661] dbg: logger: logging level is DBG
    [7661] dbg: generic: SpamAssassin version 3.1.5
    [7661] dbg: config: score set 0 chosen.
    [7661] dbg: util: running in taint mode? yes
    [7661] dbg: util: taint mode: deleting unsafe environment variables, resetting PATH
    [7661] dbg: util: PATH included '/opt/local/bin', keeping
    [7661] dbg: util: PATH included '/opt/local/sbin', keeping
    [7661] dbg: util: PATH included '/bin', keeping
    [7661] dbg: util: PATH included '/sbin', keeping
    [7661] dbg: util: PATH included '/usr/bin', keeping
    [7661] dbg: util: PATH included '/usr/sbin', keeping
    [7661] dbg: util: final PATH set to: /opt/local/bin:/opt/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin
    [7661] dbg: message: ---- MIME PARSER START ----
    [7661] dbg: message: main message type: text/plain
    [7661] dbg: message: parsing normal part
    [7661] dbg: message: added part, type: text/plain
    [7661] dbg: message: ---- MIME PARSER END ----
    [7661] dbg: dns: is Net::DNS::Resolver available? yes
    [7661] dbg: dns: Net::DNS version: 0.59
    [7661] dbg: diag: perl platform: 5.008006 darwin
    [7661] dbg: diag: module installed: Digest::SHA1, version 2.10
    [7661] dbg: diag: module installed: DB_File, version 1.814
    [7661] dbg: diag: module installed: HTML::Parser, version 3.36
    [7661] dbg: diag: module installed: MIME::Base64, version 3.05
    [7661] dbg: diag: module installed: Net::DNS, version 0.59
    [7661] dbg: diag: module installed: Net::SMTP, version 2.29
    [7661] dbg: diag: module installed: Mail::SPF::Query, version 1.999001
    [7661] dbg: diag: module installed: IP::Country::Fast, version 604.001
    [7661] dbg: diag: module installed: Razor2::Client::Agent, version 2.82
    [7661] dbg: diag: module installed: Net::Ident, version 1.20
    [7661] dbg: diag: module not installed: IO::Socket::INET6 ('require' failed)
    [7661] dbg: diag: module installed: IO::Socket::SSL, version 1.0
    [7661] dbg: diag: module installed: Time::HiRes, version 1.68
    [7661] dbg: diag: module installed: DBI, version 1.52
    [7661] dbg: diag: module installed: Getopt::Long, version 2.34
    [7661] dbg: diag: module installed: LWP::UserAgent, version 2.033
    [7661] dbg: diag: module installed: HTTP::Date, version 1.47
    [7661] dbg: diag: module installed: Archive::Tar, version 1.30
    [7661] dbg: diag: module installed: IO::Zlib, version 1.04
    [7661] dbg: ignore: using a test message to lint rules
    [7661] dbg: config: using "/etc/mail/spamassassin" for site rules pre files
    [7661] dbg: config: read file /etc/mail/spamassassin/init.pre
    [7661] dbg: config: read file /etc/mail/spamassassin/v310.pre
    [7661] dbg: config: read file /etc/mail/spamassassin/v312.pre
    [7661] dbg: config: using "/usr/local/share/spamassassin" for sys rules pre files
    [7661] dbg: config: using "/usr/local/share/spamassassin" for default rules dir
    [7661] dbg: config: read file /usr/local/share/spamassassin/10_misc.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/20_advance_fee.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/20_anti_ratware.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/20_body_tests.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/20_compensate.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/20_dnsbl_tests.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/20_drugs.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/20_fake_helo_tests.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/20_head_tests.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/20_html_tests.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/20_meta_tests.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/20_net_tests.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/20_phrases.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/20_****.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/20_ratware.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/20_uri_tests.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/23_bayes.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/25_accessdb.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/25_antivirus.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/25_body_tests_es.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/25_body_tests_pl.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/25_dcc.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/25_dkim.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/25_domainkeys.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/25_hashcash.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/25_pyzor.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/25_razor2.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/25_replace.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/25_spf.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/25_textcat.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/25_uribl.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/30_text_de.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/30_text_fr.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/30_text_it.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/30_text_nl.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/30_text_pl.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/30_text_pt_br.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/50_scores.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/60_awl.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/60_whitelist.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/60_whitelist_dk.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/60_whitelist_dkim.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/60_whitelist_spf.cf
    [7661] dbg: config: read file /usr/local/share/spamassassin/60_whitelist_subject.cf
    [7661] dbg: config: using "/etc/mail/spamassassin" for site rules dir
    [7661] dbg: config: read file /etc/mail/spamassassin/70_sare_adult.cf
    [7661] dbg: config: read file /etc/mail/spamassassin/70_sare_bayes_poison_nxm.cf
    [7661] dbg: config: read file /etc/mail/spamassassin/70_sare_evilnum0.cf
    [7661] dbg: config: read file /etc/mail/spamassassin/70_sare_evilnum1.cf
    [7661] dbg: config: read file /etc/mail/spamassassin/70_sare_evilnum2.cf
    [7661] dbg: config: read file /etc/mail/spamassassin/70_sare_html.cf
    [7661] dbg: config: read file /etc/mail/spamassassin/70_sare_obfu.cf
    [7661] dbg: config: read file /etc/mail/spamassassin/70_sare_oem.cf
    [7661] dbg: config: read file /etc/mail/spamassassin/70_sare_random.cf
    [7661] dbg: config: read file /etc/mail/spamassassin/70_sare_stocks.cf
    [7661] dbg: config: read file /etc/mail/spamassassin/70_sc_top200.cf
    [7661] dbg: config: read file /etc/mail/spamassassin/99_FVGT_Tripwire.cf
    [7661] dbg: config: read file /etc/mail/spamassassin/Chinese_rules.cf
    [7661] dbg: config: read file /etc/mail/spamassassin/local.cf
    [7661] dbg: config: read file /etc/mail/spamassassin/weeds.cf
    [7661] dbg: config: using "/var/clamav/.spamassassin/user_prefs" for user prefs file
    [7661] dbg: config: read file /var/clamav/.spamassassin/user_prefs
    [7661] dbg: plugin: loading Mail::SpamAssassin::Plugin::URIDNSBL from @INC
    [7661] dbg: plugin: registered Mail::SpamAssassin::Plugin::URIDNSBL=HASH(0x1e0803c)
    [7661] dbg: plugin: loading Mail::SpamAssassin::Plugin::Hashcash from @INC
    [7661] dbg: plugin: registered Mail::SpamAssassin::Plugin::Hashcash=HASH(0x1c7ac1c)
    [7661] dbg: plugin: loading Mail::SpamAssassin::Plugin::SPF from @INC
    [7661] dbg: plugin: registered Mail::SpamAssassin::Plugin::SPF=HASH(0x1c843a8)
    [7661] dbg: plugin: loading Mail::SpamAssassin::Plugin::Pyzor from @INC
    [7661] dbg: pyzor: network tests on, attempting Pyzor
    [7661] dbg: plugin: registered Mail::SpamAssassin::Plugin::Pyzor=HASH(0x1de8604)
    [7661] dbg: plugin: loading Mail::SpamAssassin::Plugin::Razor2 from @INC
    [7661] dbg: razor2: razor2 is available, version 2.82
    [7661] dbg: plugin: registered Mail::SpamAssassin::Plugin::Razor2=HASH(0x466a74)
    [7661] dbg: plugin: loading Mail::SpamAssassin::Plugin::SpamCop from @INC
    [7661] dbg: reporter: network tests on, attempting SpamCop
    [7661] dbg: plugin: registered Mail::SpamAssassin::Plugin::SpamCop=HASH(0x468b2c)
    [7661] dbg: plugin: loading Mail::SpamAssassin::Plugin::AWL from @INC
    [7661] dbg: plugin: registered Mail::SpamAssassin::Plugin::AWL=HASH(0x1c73978)
    [7661] dbg: plugin: loading Mail::SpamAssassin::Plugin::AutoLearnThreshold from @INC
    [7661] dbg: plugin: registered Mail::SpamAssassin::Plugin::AutoLearnThreshold=HASH(0x1c595a8)
    [7661] dbg: plugin: loading Mail::SpamAssassin::Plugin::WhiteListSubject from @INC
    [7661] dbg: plugin: registered Mail::SpamAssassin::Plugin::WhiteListSubject=HASH(0x1c5c088)
    [7661] dbg: plugin: loading Mail::SpamAssassin::Plugin::MIMEHeader from @INC
    [7661] dbg: plugin: registered Mail::SpamAssassin::Plugin::MIMEHeader=HASH(0x1c863d8)
    [7661] dbg: plugin: loading Mail::SpamAssassin::Plugin::ReplaceTags from @INC
    [7661] dbg: plugin: registered Mail::SpamAssassin::Plugin::ReplaceTags=HASH(0x1c8ad08)
    [7661] dbg: config: adding redirector regex: /^http:\/\/chkpt\.zdnet\.com\/chkpt\/\w+\/(.*)$/i
    [7661] dbg: config: adding redirector regex: /^http:\/\/www(?:\d+)?\.nate\.com\/r\/\w+\/(.*)$/i
    [7661] dbg: config: adding redirector regex: /^http:\/\/.+\.gov\/(?:.*\/)?externalLink\.jhtml\?.*url=(.*?)(?:&.*)?$/i
    [7661] dbg: config: adding redirector regex: /^http:\/\/redir\.internet\.com\/.+?\/.+?\/(.*)$/i
    [7661] dbg: config: adding redirector regex: /^http:\/\/(?:.*?\.)?adtech\.de\/.*(?:;|\|)link=(.*?)(?:;|$)/i
    [7661] dbg: config: adding redirector regex: m'^http.*?/redirect\.php\?.*(?<=[?&])goto=(.*?)(?:$|[&\#])'i
    [7661] dbg: config: adding redirector regex: m'^https?:/*(?:[^/]+\.)?emf\d\.com/r\.cfm.*?&r=(.*)'i
    [7661] dbg: config: adding redirector regex: m'/(?:index.php)?\?.*(?<=[?&])URL=(.*?)(?:$|[&\#])'i
    [7661] dbg: config: adding redirector regex: m'^http:/*(?:\w+\.)?google(?:\.\w{2,3}){1,2}/url\?.*?(?<=[?&])q=(.*?)(?:$|[&\#] )'i
    [7661] dbg: config: adding redirector regex: m'^http:/*(?:\w+\.)?google(?:\.\w{2,3}){1,2}/search\?.*?(?<=[?&])q=[^&]*?(?<=%2 0|..[=+\s])site:(.*?)(?:$|%20|[\s+&\#])'i
    [7661] dbg: config: adding redirector regex: m'^http:/*(?:\w+\.)?google(?:\.\w{2,3}){1,2}/search\?.*?(?<=[?&])q=[^&]*?(?<=%2 0|..[=+\s])(?:"|%22)(.*?)(?:$|%22|["\s+&\#])'i
    [7661] dbg: config: adding redirector regex: m'^http:/*(?:\w+\.)?google(?:\.\w{2,3}){1,2}/translate\?.*?(?<=[?&])u=(.*?)(?:$ |[&\#])'i
    [7661] dbg: plugin: Mail::SpamAssassin::Plugin::ReplaceTags=HASH(0x1c8ad08) implements 'finish_parsing_end'
    [7661] dbg: replacetags: replacing tags
    [7661] dbg: replacetags: done replacing tags
    [7661] dbg: bayes: tie-ing to DB file R/O /var/clamav/.spamassassin/bayes_toks
    [7661] dbg: bayes: tie-ing to DB file R/O /var/clamav/.spamassassin/bayes_seen
    [7661] dbg: bayes: found bayes db version 3
    [7661] dbg: bayes: DB journal sync: last sync: 1159452604
    [7661] dbg: config: score set 3 chosen.
    [7661] dbg: message: ---- MIME PARSER START ----
    [7661] dbg: message: main message type: text/plain
    [7661] dbg: message: parsing normal part
    [7661] dbg: message: added part, type: text/plain
    [7661] dbg: message: ---- MIME PARSER END ----
    [7661] dbg: dns: name server: 209.198.128.11, family: 2, ipv6: 0
    [7661] dbg: dns: testing resolver nameservers: 209.198.128.11, 209.198.128.27
    [7661] dbg: dns: trying (3) linux.org...
    [7661] dbg: dns: looking up NS for 'linux.org'
    [7661] dbg: dns: NS lookup of linux.org using 209.198.128.11 succeeded => DNS available (set dns_available to override)
    [7661] dbg: dns: is DNS available? 1
    [7661] dbg: metadata: X-Spam-Relays-Trusted:
    [7661] dbg: metadata: X-Spam-Relays-Untrusted:
    [7661] dbg: metadata: X-Spam-Relays-Internal:
    [7661] dbg: metadata: X-Spam-Relays-External:
    [7661] dbg: message: no encoding detected
    [7661] dbg: plugin: Mail::SpamAssassin::Plugin::URIDNSBL=HASH(0x1e0803c) implements 'parsed_metadata'
    [7661] dbg: uridnsbl: domains to query:
    [7661] dbg: dns: checking RBL sbl-xbl.spamhaus.org., set sblxbl-lastexternal
    [7661] dbg: dns: checking RBL sa-accredit.habeas.com., set habeas-firsttrusted
    [7661] dbg: dns: checking RBL sbl-xbl.spamhaus.org., set sblxbl
    [7661] dbg: dns: checking RBL sa-other.bondedsender.org., set bsp-untrusted
    [7661] dbg: dns: checking RBL combined.njabl.org., set njabl-lastexternal
    [7661] dbg: dns: checking RBL combined.njabl.org., set njabl
    [7661] dbg: dns: checking RBL combined-HIB.dnsiplists.completewhois.com., set whois
    [7661] dbg: dns: checking RBL list.dsbl.org., set dsbl-lastexternal
    [7661] dbg: dns: checking RBL bl.spamcop.net., set spamcop
    [7661] dbg: dns: checking RBL sa-trusted.bondedsender.org., set bsp-firsttrusted
    [7661] dbg: dns: checking RBL combined-HIB.dnsiplists.completewhois.com., set whois-lastexternal
    [7661] dbg: dns: checking RBL dnsbl.sorbs.net., set sorbs-lastexternal
    [7661] dbg: dns: checking RBL dnsbl.sorbs.net., set sorbs
    [7661] dbg: dns: checking RBL iadb.isipp.com., set iadb-firsttrusted
    [7661] dbg: check: running tests for priority: 0
    [7661] dbg: rules: running header regexp tests; score so far=0
    [7661] dbg: rules: ran header rule __HAS_MSGID ======> got hit: "<"@lint_rules>"
    [7661] dbg: rules: ran header rule NO_REAL_NAME ======> got hit: "[email protected]
    [7661] dbg: rules: "
    [7661] dbg: rules: ran header rule __MSGID_OK_DIGITS ======> got hit: "1159453076"
    [7661] dbg: spf: no suitable relay for spf use found, skipping SPF-helo check
    [7661] dbg: eval: all '*From' addrs: [email protected]
    [7661] dbg: eval: all '*To' addrs:
    [7661] dbg: spf: no suitable relay for spf use found, skipping SPF check
    [7661] dbg: rules: ran eval rule NO_RELAYS ======> got hit
    [7661] dbg: spf: cannot get Envelope-From, cannot use SPF
    [7661] dbg: spf: def_spf_whitelist_from: could not find useable envelope sender
    [7661] dbg: rules: ran eval rule __UNUSABLE_MSGID ======> got hit
    [7661] dbg: spf: spf_whitelist_from: could not find useable envelope sender
    [7661] dbg: rules: running body-text per-line regexp tests; score so far=0.96
    [7661] dbg: rules: ran body rule __NONEMPTY_BODY ======> got hit: "I"
    [7661] dbg: uri: running uri tests; score so far=0.96
    [7661] dbg: bayes: DB journal sync: last sync: 1159452604
    [7661] dbg: bayes: corpus size: nspam = 4348, nham = 68816
    [7661] dbg: bayes: score = 0.451072137096852
    [7661] dbg: bayes: DB journal sync: last sync: 1159452604
    [7661] dbg: bayes: untie-ing
    [7661] dbg: bayes: untie-ing db_toks
    [7661] dbg: bayes: untie-ing db_seen
    [7661] dbg: rules: ran eval rule BAYES_50 ======> got hit
    [7661] dbg: rules: running raw-body-text per-line regexp tests; score so far=0.961
    [7661] dbg: rules: running full-text regexp tests; score so far=0.961
    [7661] dbg: info: entering helper-app run mode
    [7661] dbg: info: leaving helper-app run mode
    [7661] dbg: razor2: part=0 engine=4 contested=0 confidence=0
    [7661] dbg: razor2: results: spam? 0
    [7661] dbg: razor2: results: engine 8, highest cf score: 0
    [7661] dbg: razor2: results: engine 4, highest cf score: 0
    [7661] dbg: util: current PATH is: /opt/local/bin:/opt/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin
    [7661] dbg: pyzor: pyzor is not available: no pyzor executable found
    [7661] dbg: pyzor: no pyzor found, disabling Pyzor
    [7661] dbg: plugin: Mail::SpamAssassin::Plugin::URIDNSBL=HASH(0x1e0803c) implements 'check_tick'
    [7661] dbg: check: running tests for priority: 500
    [7661] dbg: plugin: Mail::SpamAssassin::Plugin::URIDNSBL=HASH(0x1e0803c) implements 'check_post_dnsbl'
    [7661] dbg: rules: running meta tests; score so far=0.961
    [7661] info: rules: meta test DIGEST_MULTIPLE has undefined dependency 'DCC_CHECK'
    [7661] info: rules: meta test SARE_OBFU_CIALIS has undefined dependency 'SARE_OBFU_CIALIS2'
    [7661] info: rules: meta test FP_MIXED_****3 has undefined dependency 'FP_PENETRATION'
    [7661] dbg: rules: running header regexp tests; score so far=2.907
    [7661] dbg: rules: running body-text per-line regexp tests; score so far=2.907
    [7661] dbg: uri: running uri tests; score so far=2.907
    [7661] dbg: rules: running raw-body-text per-line regexp tests; score so far=2.907
    [7661] dbg: rules: running full-text regexp tests; score so far=2.907
    [7661] dbg: check: running tests for priority: 1000
    [7661] dbg: rules: running meta tests; score so far=2.907
    [7661] dbg: rules: running header regexp tests; score so far=2.907
    [7661] dbg: config: using "/var/clamav/.spamassassin" for user state dir
    [7661] dbg: locker: safe_lock: created /var/clamav/.spamassassin/auto-whitelist.lock.xserve1.topequip.com.7661
    [7661] dbg: locker: safe_lock: trying to get lock on /var/clamav/.spamassassin/auto-whitelist with 0 retries
    [7661] dbg: locker: safe_lock: link to /var/clamav/.spamassassin/auto-whitelist.lock: link ok
    [7661] dbg: auto-whitelist: tie-ing to DB file of type DB_File R/W in /var/clamav/.spamassassin/auto-whitelist
    [7661] dbg: auto-whitelist: db-based [email protected]|ip=none scores 0/0
    [7661] dbg: auto-whitelist: AWL active, pre-score: 2.907, autolearn score: 2.907, mean: undef, IP: undef
    [7661] dbg: auto-whitelist: DB addr list: untie-ing and unlocking
    [7661] dbg: auto-whitelist: DB addr list: file locked, breaking lock
    [7661] dbg: locker: safe_unlock: unlink /var/clamav/.spamassassin/auto-whitelist.lock
    [7661] dbg: auto-whitelist: post auto-whitelist score: 2.907
    [7661] dbg: rules: running body-text per-line regexp tests; score so far=2.907
    [7661] dbg: uri: running uri tests; score so far=2.907
    [7661] dbg: rules: running raw-body-text per-line regexp tests; score so far=2.907
    [7661] dbg: rules: running full-text regexp tests; score so far=2.907
    [7661] dbg: check: is spam? score=2.907 required=2
    [7661] dbg: check: tests=BAYES_50,MISSING_SUBJECT,NO_REAL_NAME,NO_RECEIVED,NO_RELAYS,TO_CC_NONE
    [7661] dbg: check: subtests=__HAS_MSGID,__MSGID_OK_DIGITS,__MSGID_OK_HOST,__NONEMPTY_BODY,__SANE_MSGID,__UNUSABLE_MSGID
    Any ideas and help is greatly appreciated.

    Hi,
    As you mentioned, your computer shows as activated successfully in the computer's Properties, but the error message still appears to notify you.
    If the messages you see come and go, you have what's called a 'race condition' where the Software Protection Service and another process fight over resources - if the SPPSVC doesn't win the race in a given time, it will throw the notification until it manages to re-test.
    Please try to restore your system back to the point when it worked fine. If the issue persists, please try to scan your system to see if there is any malware or a virus.
    Also, try this fix to check the Software Protection service:
    This computer is not running genuine windows 0x8004fe21
    http://blog.teliaz.com/2012/this-computer-is-not-running-genuine-windows-0x8004fe21/comment-page-1#comment-1423
    For further help, please upload the event log for research:
    Collect event log:
    http://windows.microsoft.com/en-us/windows7/what-information-appears-in-event-logs-event-viewer
    Hope these could be helpful.
    Kate Li
    TechNet Community Support

  • Unable to create foreign key: InvalidArgument=Value of '0' is not valid for 'index'. Parameter name: index

    I am running an SQL (CE) script to create a DB. All script commands succeed, but the DB gets "broken" after creating the last constraint: after running the script, viewing the table properties of Table2 and clicking on "Manage relations" gives the following error: Unable to create foreign key: InvalidArgument=Value of '0' is not valid for 'index'. Parameter name: index. Wondering what that refers to...
    Here is the script. Please note that no error is thrown by running the following queries (even when passing the queries by hand, one by one, to SQL Server Management Studio).
    CREATE TABLE [table1] (
    [id_rubrica] numeric(18,0) NOT NULL
    , [id_campo] numeric(18,0) NOT NULL
    , [nome] nvarchar(100) NOT NULL
    );
    GO
    ALTER TABLE [table1] ADD PRIMARY KEY ([id_rubrica],[id_campo]);
    GO
    CREATE UNIQUE INDEX [UQ__m_campi] ON [table1] ([id_campo] Asc);
    GO
    CREATE TABLE [table2] (
    [id_campo] numeric(18,0) NOT NULL
    , [valore] nvarchar(4000) NOT NULL
    );
    GO
    ALTER TABLE [table2] ADD PRIMARY KEY ([id_campo],[valore]);
    GO
    ALTER TABLE [table2] ADD CONSTRAINT [campo_valoriFissi] FOREIGN KEY ([id_campo]) REFERENCES [table1]([id_campo]);
    GO
    Sid (MCP - http://www.sugata.eu)

    I know this is kind of an old post, but did this really solve your problem?
    I'm getting this same error message after adding a FK constraint via the UI in VS2008 Server Explorer.
    I can add the constraint with no errors, but the constraint is not created in the DataSet wizard (strongly typed datasets on Win CE 6), and when I click "Manage Relations" in the "Table Properties" this error pops up:
    "InvalidArgument=Value or '0' is not valid for 'index'.
    Parameter name: index"
    Even after creating my table with the relation in SQL, the same occurs:
    CREATE TABLE pedidosRastreios (
        idPedidoRastreio INT NOT NULL IDENTITY PRIMARY KEY,
        idPedido INT NOT NULL CONSTRAINT FK_pedidosRastreios_pedidos REFERENCES pedidos(idPedido) ON DELETE CASCADE,
        codigo NVARCHAR(20) NOT NULL
    );

  • Query takes longer to run with indexes.

    Here is my situation. I had a query which I used to run in the Production (Oracle 9.2.0.5) and Reporting (9.2.0.3) databases. The time taken to run in both databases was almost the same, about 2 minutes, until 2 months ago. Now in Production the query does not complete at all, whereas in Reporting it continues to run in about 2 minutes.
    Some of the things I observed in Production: the optimizer_index_cost_adj parameter was changed from 100 to 20 about 3 months ago in order to improve the performance of a paycalc program. Even with this parameter set to 20, the query used to run in 2 minutes until 2 months ago. In the last two months the GL table grew from 25 million rows to 27 million rows. With optimizer_index_cost_adj at 20 and the GL table at 25 million rows it runs fine, but with 27 million rows it does not complete at all. If I change optimizer_index_cost_adj to 100, the query runs against 27 million rows in 2 minutes, and I found that it uses a full table scan. In the Reporting database it always used a full table scan, as found through explain plan. The CBO determines which scan is best and uses that.
    So my question is: by setting optimizer_index_cost_adj = 20, does Oracle force it to use an index scan when the table size is 27 million rows? Isn't an index scan faster than a full table scan? In what situations is a full table scan faster than an index scan? If I drop all the indexes on the GL table, the query runs faster in Production because it uses a full table scan. What is the real benefit of changing optimizer_index_cost_adj values? Any input is most welcome.

    "Isn't an index scan faster than a full table scan? In what situation is a full table scan faster than an index scan?"
    No. It is not about which one is the "fastest", as that concept is flawed. How can an index be "faster" than a table, for example? Does it have better tires and a shinier paint job? ;-)
    It is about the amount of I/O that the database needs to perform in order to use that object's contents for resolving/executing that applicable SQL statement.
    If the CBO determines that it needs 100 widgets' worth of I/O to scan the index, and then another 100 widgets of I/O to scan the table, it may decide not to use the index at all, as a full table scan will cost only 180 I/O widgets - 20 less than the combined scanning of index and table.
    Also, a full scan can make use of multi-block reads - and this, on most storage/file systems, is faster than single block reads.
    So no - a full table scan is NOT a Bad Thing (tm) and not an indicator of a problem. The thing that is of concern is the amount of I/O. The more I/O, the slower the operation. So obviously, we want to make sure that we design SQL that requires the minimal amount of I/O, design a database that support minimal I/O to find the required data (using clusters/partitions/IOTs/indexes/etc), and then check that the CBO also follows suit (which can be the complex bit).
    But before questioning the CBO, first question your code and design - and whether or not they provide the optimal (smallest) I/O footprint for the job at hand.
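    If you want to see what the CBO actually chooses under each setting, a rough way to compare the two plans is sketched below. The connect string, table and column names are placeholders (your real GL query goes where the sample SELECT is), and it assumes a PLAN_TABLE exists in the schema (utlxplan.sql creates one), so treat this purely as a template.
sqlplus -S scott/tiger@PROD <<'SQL'
-- plan with the session-level override at 20
DELETE FROM plan_table;
ALTER SESSION SET optimizer_index_cost_adj = 20;
EXPLAIN PLAN FOR SELECT SUM(amount) FROM gl WHERE period = 200901;
SELECT * FROM table(DBMS_XPLAN.DISPLAY);
-- same statement again at the default of 100
DELETE FROM plan_table;
ALTER SESSION SET optimizer_index_cost_adj = 100;
EXPLAIN PLAN FOR SELECT SUM(amount) FROM gl WHERE period = 200901;
SELECT * FROM table(DBMS_XPLAN.DISPLAY);
EXIT
SQL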

  • Why does it not use the index?

    L.S.,
    We are using a table called IT_RFC_REGISTRATION. It is a relatively big table for our application.
    Its primary key is RFCNR, each new RFCNR getting the next value.
    Now for my intranet report I am interested in the last 40 records. But when I execute:
    SELECT *
    FROM IT_RFC_REGISTRATION
    ORDER BY RFCNR DESC
    the query takes ages to execute.
    When I do this:
    SELECT RFCNR
    FROM IT_RFC_REGISTRATION
    ORDER BY RFCNR DESC
    the result comes back instantaneously because this query uses the index on RFCNR.
    Why does the former query not use the index to execute? It should be much faster to fetch ROWIDs from the index end to start and use those to get the records, than to load all the records and then sort them.
    Is there a trick with which I can use a join of the latter query and the former query to speed up the result?
    Greetings,
    Philbert de Zwart,
    Utrecht, The Netherlands.

    The difference you see in query run time is based on the amount of data being sorted, then returned. In the first query, a full table scan is faster since, if the index were used, Oracle would have to do a lookup in the index, get the rowids, and then go look up the data in the table (TWO disk I/Os). It's faster to just scan the entire table.
    Indexes will generally not be used unless you have a WHERE clause. If you only need a few fields from the table, you could include them all in an index. For instance, if you only need RFCNR & DESC, create a concatenated index on those two columns and then only a scan of the index is required (very fast).
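    To make the "last 40 records" case concrete, one common Top-N pattern on 9i is to do the ORDER BY in an inline view and cut it with ROWNUM, which often lets the optimizer read the RFCNR index backwards and stop after 40 rows instead of sorting the whole table. A small sketch, run here through SQL*Plus (the connect string is a placeholder; the table and column names are taken from the question):
sqlplus -S scott/tiger@YOURDB <<'SQL'
-- newest 40 rows: sort inside the inline view, then stop after 40
SELECT *
FROM (SELECT * FROM it_rfc_registration ORDER BY rfcnr DESC)
WHERE ROWNUM <= 40;
EXIT
SQL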

  • Published report will not run; works fine in VS2010 debug mode.

    Have 64-bit Visual Studio Ultimate installed, CRv13_1 and MSSQL 2008 R2.
    The issue I am having is that the report works like a champ when I run the web application in debug mode, i.e. when I hit the Run button in Visual Studio. However, when I publish the application the report will not run. I get various errors depending on how I have connected the report to MSSQL. I have tried ODBC and OLE DB connections. I have tried Windows authentication and an MSSQL user login. The results are the same: the report works fine in debug mode but will not work when published. The most common error message is:
    Error
    Database logon failed.
    or
    Login failed for user 'NT AUTHORITY\NETWORK SERVICE'
    Depending on how I am authenticating. I have Googled this issue and there are tons of posts about it but no solutions. Any help would be appreciated.

    See if the article [Troubleshooting Guide to Database Connectivity Issues with Crystal Reports in Visual Studio .NET Applications|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/b0225775-88c4-2c10-bd80-8298769293de] helps.
    I'd settle on the preferred connection method and troubleshoot that. Use the search box in the top right corner of this web page. It will bring up KBases, blogs, wikis, articles and more. Also, search these forums. There is lots of info here and lots of answers.
    - Ludek

  • WebApp not running when using DeployTool

    When I deploy an EAR with a Web project from the Eclipse IDE, the deployment goes fine and the website is available immediately.
    But when I use the deployment tool - with a deploy-manager-config.xml - the website is NOT running after deployment. What can the difference be? The output looks very similar.
    The output from Eclipse is:
    Jun 19, 2006 9:07:20 AM  Info: -------------------------- Starting deployment ------------------------
    Jun 19, 2006 9:07:20 AM  Info: Error handling strategy: OnErrorStop
    Jun 19, 2006 9:07:20 AM  Info: Prerequisite error handling strategy: OnPrerequisiteErrorStop
    Jun 19, 2006 9:07:20 AM  Info: Update strategy: UpdateAllVersions
    Jun 19, 2006 9:07:20 AM  Info: Starting deployment prerequisites:
    Jun 19, 2006 9:07:20 AM  Info: Loading selected archives...
    Jun 19, 2006 9:07:20 AM  Info: Loading archive 'C:usrsapJ2EJC00SDMprogramtemptemp8858MyAppJUnitApp.ear'
    Jun 19, 2006 9:07:20 AM  Info: Selected archives successfully loaded.
    Jun 19, 2006 9:07:20 AM  Info: Actions per selected component:
    Jun 19, 2006 9:07:20 AM  Info: Update: Selected development component 'MyAppJUnitApp'/'MyApp.com'/'localhost'/'2006.06.19.09.04.15' updates currently deployed development component 'MyAppJUnitApp'/'MyApp.com'/'localhost'/'2006.06.19.08.50.10'.
    Jun 19, 2006 9:07:21 AM  Info: Ending deployment prerequisites. All items are correct.
    Jun 19, 2006 9:07:21 AM  Info: Saved current Engine state.
    Jun 19, 2006 9:07:21 AM  Info: Starting: Update: Selected development component 'MyAppJUnitApp'/'MyApp.com'/'localhost'/'2006.06.19.09.04.15' updates currently deployed development component 'MyAppJUnitApp'/'MyApp.com'/'localhost'/'2006.06.19.08.50.10'.
    Jun 19, 2006 9:07:21 AM  Info: SDA to be deployed: C:usrsapJ2EJC00SDMrootoriginMyApp.comMyAppJUnitApplocalhost2006.06.19.09.04.15temp8858MyAppJUnitApp.ear
    Jun 19, 2006 9:07:21 AM  Info: Software type of SDA: J2EE
    Jun 19, 2006 9:07:21 AM  Info: ***** Begin of SAP J2EE Engine Deployment (J2EE Application) *****
    Jun 19, 2006 9:07:26 AM  Info: Begin of log messages of the target system:
    06/06/19 09:07:21 -  ***********************************************************
    06/06/19 09:07:22 -  Start updating EAR file...
    06/06/19 09:07:22 -  start-up mode is lazy
    06/06/19 09:07:23 -  EAR file updated successfully for 906ms.
    06/06/19 09:07:23 -  Start updating...
    06/06/19 09:07:23 -  EAR file uploaded to server for 766ms.
    06/06/19 09:07:25 -  Successfully updated. Update took 1969ms.
    06/06/19 09:07:25 -  Deploy Service status:
    06/06/19 09:07:25 -    Application : MyApp.com/MyAppJUnitApp
    06/06/19 09:07:25 -   
    06/06/19 09:07:25 -    MyAppJUnitWeb  - WEB
    06/06/19 09:07:25 -    MyApp.com/MyAppJUnitApp  - METAMODELREPOSITORY
    06/06/19 09:07:25 -  ***********************************************************
    Jun 19, 2006 9:07:26 AM  Info: End of log messages of the target system.
    Jun 19, 2006 9:07:26 AM  Info: ***** End of SAP J2EE Engine Deployment (J2EE Application) *****
    Jun 19, 2006 9:07:26 AM  Info: Finished successfully: development component 'MyAppJUnitApp'/'MyApp.com'/'localhost'/'2006.06.19.09.04.15'
    Jun 19, 2006 9:07:27 AM  Info: J2EE Engine is in same state (online/offline) as it has been before this deployment process.
    Jun 19, 2006 9:07:27 AM  Info: ----------------------- Deployment was successful ---------------------
    The output from the deploytool is:
    build:
            [java] ConfigurationManager: found jar for secure store C:usrsapj2ejc00j2eedeploying......SYSglobalsecuritylibtoolsiaik_jce_export.jar
            [java] ConfigurationManager: found jar for secure store C:usrsapj2ejc00j2eedeploying......SYSglobalsecuritylibtoolsiaik_jsse.jar
            [java] ConfigurationManager: found jar for secure store C:usrsapj2ejc00j2eedeploying......SYSglobalsecuritylibtoolsiaik_smime.jar
            [java] ConfigurationManager: found jar for secure store C:usrsapj2ejc00j2eedeploying......SYSglobalsecuritylibtoolsiaik_ssl.jar
            [java] ConfigurationManager: found jar for secure store C:usrsapj2ejc00j2eedeploying......SYSglobalsecuritylibtoolsw3c_http.jar
            [java] Start updating EAR file...
            [java] 06/06/19 09:04:22 -  Start updating EAR file...
            [java] start-up mode is lazy
            [java] 06/06/19 09:04:22 -  start-up mode is lazy
            [java] EAR file updated successfully for 4000ms.
            [java] 06/06/19 09:04:26 -  EAR file updated successfully for 4000ms.
            [java] Start deploying ...
            [java] 06/06/19 09:04:26 -  Start deploying ...
            [java] dm_msg_0006
            [java] EAR file uploaded to server for 2562ms.
            [java] 06/06/19 09:04:30 -  EAR file uploaded to server for 2562ms.
            [java] Successfully deployed. Deployment took 1844ms.
            [java] 06/06/19 09:04:32 -  Successfully deployed. Deployment took 1844ms.
            [java]   Application : MyApp.com/MyAppJUnitApp
            [java] 06/06/19 09:04:32 -    Application : MyApp.com/MyAppJUnitApp
            [java]  
            [java] 06/06/19 09:04:32 -   
            [java]   MyAppJUnitWeb  - WEB
            [java] 06/06/19 09:04:32 -    MyAppJUnitWeb  - WEB
            [java]   MyApp.com/MyAppJUnitApp  - METAMODELREPOSITORY
            [java] 06/06/19 09:04:32 -    MyApp.com/MyAppJUnitApp  - METAMODELREPOSITORY

    It works!!
    I made that Ant deploy task from one of those PDF documents.
    It is working fine, but I still have a question:
    Why is Ant unable to find basic classes such as BaseException and others?
    Now I have to include the whole plugins folder to avoid listing all the jar files which are necessary.
    [code]
    <taskdef name="deploy"
             classname="com.sap.deploy.DeployEarTask">
       <classpath>
          <pathelement location="${deploy.task.class}"/>
          <pathelement location="C:/Program Files/SAP/JDT/eclipse/plugins/org.apache.ant_1.5.3/ant.jar"/>
          <pathelement location="C:/usr/sap/J2E/JC00/j2ee/deploying/sapj2eenginedeploy.jar"/>
          <fileset dir="C:/Program Files/SAP/JDT/eclipse/plugins"
                 includes="**/*.jar"></fileset>
        </classpath>
    </taskdef>
    [/code]
