Correlation ID Result from Log file

Area : SharePoint Foundation
Category : Monitoring
Level : High
EventID : b4ly
Message : Leaving Monitored Scope (PostResolveRequestCacheHandler). Execution Time=6.92841879464484

Area : SharePoint Foundation
Category : Monitoring
Level : High
EventID : b4ly
Message : Leaving Monitored Scope (EnsureListItemsData). Execution Time=11.2867567953774

Area : SharePoint Foundation
Category : Monitoring
Level : High
EventID : b4ly
Message : Leaving Monitored Scope (EnsureListItemsData). Execution Time=10.4556586124602

Area : SharePoint Foundation
Category : Monitoring
Level : High
EventID : b4ly
Message : Leaving Monitored Scope (EnsureListItemsData#1). Execution Time=11.0272439240317

Area : SharePoint Foundation
Category : Monitoring
Level : High
EventID : b4ly
Message : Leaving Monitored Scope (CachedObjectFactory: Caching ListItem at: /Pages/Home.aspx). Execution Time=54.5198765507126

Area : SharePoint Foundation
Category : Database
Level : Monitorable
EventID : fa42
Message : A large block of literal text was sent to sql. This can result in blocking in sql and excessive memory use on the front end. Verify that no binary parameters are being passed as literals, and consider breaking up batches into smaller components. If this request is for a SharePoint list or list item, you may be able to resolve this by reducing the number of fields.

Area : SharePoint Foundation
Category : Database
Level : High
EventID : fa43
Message : Slow Query Duration: 22.711893080697

Area : SharePoint Foundation
Category : Database
Level : High
EventID : fa44
Message : Slow Query StackTrace-Managed:
    at Microsoft.SharePoint.Utilities.SqlSession.OnPostExecuteCommand(SqlCommand command, SqlQueryData monitoringData)
    at Microsoft.SharePoint.Utilities.SqlSession.ExecuteReader(SqlCommand command, CommandBehavior behavior, SqlQueryData monitoringData, Boolean retryForDeadLock)
    at Microsoft.SharePoint.SPSqlClient.ExecuteQueryInternal(Boolean retryfordeadlock)
    at Microsoft.SharePoint.SPSqlClient.ExecuteQuery(Boolean retryfordeadlock)
    at Microsoft.SharePoint.Library.SPRequestInternalClass.CrossListQuery(String bstrUrl, String bstrXmlWebs, String bstrXmlLists, String bstrXmlQuery, ISP2DSafeArrayWriter pCallback, Object& pvarColumns)
    at Microsoft.SharePoint.Library.SPRequest.CrossListQuery(String bstrUrl, String bstrXmlWebs, String bstrXmlLists, String bstrXmlQuery, ISP2DSafeArrayWriter pCallback, Object& pvarColumns)
    at Microsoft.SharePoint.SPWeb.GetSiteData(SPSiteDataQuery query)
    at Microsoft.SharePoint.Publishing.CachedArea.GetSiteData(SPWeb web, SPSiteDataQuery siteDataQuery, Boolean useSpQueryOnList)
    at Microsoft.SharePoint.Publishing.CachedArea.GetCrossListQueryResults(SPSiteDataQuery query, SPWeb currentContext, Boolean onlyPopulateCache, Boolean useSpQueryOnList, Int32 lcid)
    at Microsoft.SharePoint.Publ…

Do you know if there is a particular list or library in your SharePoint environment that contains a large number of columns?
Area : SharePoint Foundation
Category : Database
Level : Monitorable
EventID : fa42
Message : "A large block of literal text was sent to sql. This can result in blocking in sql and excessive memory use on the front end. Verify that no binary parameters are being passed as literals, and consider breaking up batches into smaller components. If this request is for a SharePoint list or list item, you may be able to resolve this by reducing the number of fields."

Similar Messages

  • Problem to send result from log file, the log file is too large

    Hi SCOM people!
    I have a problem when monitoring a log file on a Red Hat system: I get an alert telling me that the log file is too large to send (see the alert context below). I guess the problem is that the server logs too much within the 5-minute interval at which SCOM checks.
    Any ideas how to solve this?
    Date and Time: 2014-07-24 19:50:24
    Log Name: Operations Manager
    Source: Cross Platform Modules
    Event Number: 262
    Level: 1
    Logging Computer: XXXXX.samba.net
    User: N/A
     Description:
    Error scanning logfile /xxxxxxxx/server.log on values xxxxx.xxxxx.se as user <SCXUser><UserId>xxxxxx</UserId><Elev></Elev></SCXUser>; The operation succeeded and cannot be reversed but the result is too large to send.
    Event Data:
    <DataItem type="System.XmlData" time="2014-07-24T19:50:24.5250335+02:00" sourceHealthServiceId="2D4C7DFF-BA83-10D5-9849-0CE701139B5B">
      <EventData>
        <Data>/xxxxxxxx/server.log</Data>
        <Data>xxxxx.xxxxx.se</Data>
        <Data><SCXUser><UserId>xxxxxx</UserId><Elev></Elev></SCXUser></Data>
        <Data>The operation succeeded and cannot be reversed but the result is too large to send.</Data>
      </EventData>
    </DataItem>

    Hi Fredrik,
    At any one time, SCX can return 500 matching lines. If you're trying to return > 500 matching lines, then SCX will throttle your limit to 500 lines (that is, it'll return 500 lines, note where it left off, and pick up where it left off next time log files
    are scanned).
    Now, be aware that Operations Manager will "cook down" multiple regular expressions to a single agent query. This is done for efficiency purposes. What this means: If you have 10 different, unrelated regular expressions against a single log file, all of
    these will be "cooked down" and presented to the agent as one single request. However, each of these separate regular expressions, collectively, are limited to 500 matching lines. Hope this makes sense.
    This limit is set because (at least at the time) we didn't think Operations Manager itself could handle a larger response on the management server itself. That is, it's not an agent issue as such, it's a management server issue.
    So, with that in mind, you have several options:
    If you have separate RegEx expressions, you can reconfigure your logging (presumably done via syslog?) to log your larger log messages to a separate log file. This will help "cook down", but ultimately, the limit of 500 RegEx results is still there; you're
    just mitigating cook down.
    If a single RegEx expression is matching > 500 lines, there is no workaround to this today. This is a hardcoded limit in the agent, and can't be overridden.
    Now, if you're certain that your regular expression is matching < 500 lines, yet you're getting this error, then I'd suggest contacting Microsoft Support Services to open an RFC and have this issue escalated to the product team. Due to a logging issue
    within logfilereader, I'm not certain you can enable tracing to see exactly what's going on (although you could use command line queries to see what's happening internally). This is involved enough where it's best to get Microsoft Support involved.
    But as I said, this is only useful if you're certain that your regular expression is matching < 500 lines. If you are matching more than this, this is a known restriction today. But with an RFC, even that could at least be evaluated to see exactly the
    load > 500 matches will have on the management server.
    /Jeff
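    A quick sanity check, sketched here as an assumption rather than anything from the thread: if you can copy the log to a machine with PowerShell, count how many lines your expression actually matches, to see whether you are anywhere near the 500-line cap.

    $pattern = 'your-regex-here'                       # placeholder: the expression from your monitoring rule
    (Select-String -Path 'C:\temp\server.log' -Pattern $pattern).Count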

  • SQL Server 2012 Developer Edition will not install. Setup files don't even get copied completely. Win 8.1. ACT instance is loaded & can't be deleted. From log file: Error: Action "PreMsiTimingConfigAction" failed during execution.


    Hello,
    I am glad it worked.
    Thank you for visiting MSDN forums!
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • How can a client retrieve data of interest from a log file in dbxml?

    I have heard about the db_printlog utility in BDB. Is there any similar utility (like a shell command or API) in bdbxml? I want to extract updated nodes from a log file.

    I got Berkeley DB 4.8.6 and found db_printlog. It works on a bdbxml log file; I used it to translate a log file.
    I used it with the following syntax, in a WinXP Command Prompt:
    Microsoft Windows XP [Version 5.1.2600]
    (C) Copyright 1985-2001 Microsoft Corp.
    D:\Program Files\Oracle\tmp>db_printlog > mylog.txt
    Then mylog.txt was created. Here is some part of it:
    [1][218264]__dbreg_register: rec: 2 txnp 80000017 prevlsn [1][210024]
         opcode: 3
         name: a.dbxml0
         uid: 0xc1 y0 0 0 0 40 0x91 0xed 0xde 0x1d 0 0 0 0 0 0 0 0
         fileid: 37
         ftype: 0x1
         meta_pgno: 22
         id: 0x0
    [1][218348]__txn_child: rec: 12 txnp 80000015 prevlsn [1][184288]
         child: 0x80000017
         c_lsn: [1][218264]
    [1][218388]__db_addrem: rec: 41 txnp 80000015 prevlsn [1][218348]
         opcode: 1
         fileid: 35
         pgno: 21
         indx: 0
         nbytes: 8
         hdr:
         dbt: J3
         pagelsn: [1][193024]
    and so on. But I couldn't find what I was looking for, I mean the node paths.
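    As a rough illustration (my own sketch, not from the thread), the record headers in a db_printlog dump are regular enough to tally by type with a short PowerShell pipeline; the file name and pattern follow the output shown above:

    # Count db_printlog records by type (__dbreg_register, __txn_child, __db_addrem, ...)
    Get-Content 'mylog.txt' |
        Select-String -Pattern '^\[\d+\]\[\d+\](__\w+):' |
        ForEach-Object { $_.Matches[0].Groups[1].Value } |
        Group-Object -NoElement |
        Sort-Object Count -Descending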

  • How to remove extra blank spaces from log file.

    I have created a log file which contains the logs of a program. The logs are created fine on the first execution of the program, but from the second time on, the log file gets blank spaces after every character.
    I used code like the following to append text to the log file, for example:
    'Clean up process started....' >> $log
    'File name: ' + $files + ' Time-Stamp: ' + $endtime + ' Search complete.' >> $log
    I want to remove the extra spaces after each character, but not all the spaces from the file. Thanks in advance.

    Hi mjolinor,
    Add-Content is used to insert text; it does not append text to the file. I used >> to append the text to the log file.
    From:
    Get-Help Add-Content
    Synopsis
    Appends content, such as words or data, to a file.
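    A likely cause, offered as an assumption rather than something confirmed in the thread: on Windows PowerShell, >> uses Out-File's default UTF-16 encoding, so appending to a file that began as ASCII interleaves NUL bytes that render as a space after every character. Keeping one explicit encoding avoids the mix:

    # Append with an explicit, consistent encoding instead of bare >>
    'Clean up process started....' | Out-File -FilePath $log -Append -Encoding ascii
    # Repair an already-mangled file by stripping the NUL characters (PowerShell 3+ for -Raw)
    (Get-Content $log -Raw) -replace "`0", '' | Set-Content $log -Encoding ascii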

  • Hide XML from LOG File

    Hi All,
    When we run the payment process request, XML is generated and put in the LOG file. Our requirement is to hide the XML from the log file. Please let me know how we can do that:
    <?XML version = > <outboundpaymentinstruction><paymentinstructioninfo>
    Please let me know.
    Thanks

    There are two XML formats. The one returned by GetXML is used for XML reports, not sequence files. The XML sequence file format is obtained by changing the FileWritingFormat of the file to XML (seqfile.AsPropertyObjectFile().FileWritingFormat) if it isn't already set to that, and then saving the file to disk. You can serialize the data for the sequence file in a similar XML format using Engine.SerializeObjects() using the serialization option SerializationOption_UseXml, but it will not be a sequence file. You will have to deserialize using Engine.UnserializeObjects to read it back into a sequence file.
    If you explain in more detail what you are trying to do, I could perhaps suggest what might be the best approach.
    -Doug

  • Extract certain account from log file

    I have one FTP log file which includes user accounts and activities.
    I need to pull users with failed activities. I can search in the log file, but is there a way to extract several user accounts from it?
    Thank you!

    Hi,
    That's impossible to tell without seeing at least an example of the data in the log file.
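    For illustration only, since the log layout is unknown: assuming an IIS W3C-style FTP log where the username is the fourth field and the status code the sixth, a sketch like this would collect the accounts behind failed logins; the path and field indexes are assumptions.

    Get-Content 'C:\Logs\ftp.log' |
        Where-Object { $_ -notmatch '^#' } |           # skip W3C header lines
        ForEach-Object {
            $f = $_ -split '\s+'
            if ($f[5] -eq '530') { $f[3] }             # 530 = not logged in; emit the account
        } |
        Sort-Object -Unique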

  • Recover from log files

    Hi,
    My question is silly. How do you recover (load a database) from existing files that were checkpointed, when the database itself wasn't closed? I can't find a post about that.
    I tried setting envConfig.setConfigParam("je.cleaner.expunge", "false");
    thanks

    In JE, all you need to do to recover the environment is to reinstantiate the Environment object. (i.e. yourEnv = new Environment(...)) We do talk about recovery as a specific phase in various docs, in order to try to explain some of what is happening under the covers. For the user, all you have to do is "new Environment(...)" whether the environment is brand-new or being re-opened.
    You don't really want to set je.cleaner.expunge to false, unless you're doing some debugging. When that parameter is set, JE will rename instead of remove obsolete log files, and your disk space usage will grow. It's really meant as a debugging option.

  • Analyze test result from txt file

    I need some suggestions from experts. I saved test results in a txt file with the following format (see attachment). With these test results, I need to draw a waveform of 'DNBr' based on the different OPmodes (ADSL2P, VDSL2-8a, ...). The X-axis will be the value from 'DLS'. I could create a complicated VI to do that, but I want to know whether LabVIEW has any subVI that can do this job more easily.
    Attachments:
    AW All in One_485847_102744SAV 5T 2h15 PM.txt (10 KB)

    Hi Tambrook,
    read 3 (header) lines, then read the remaining data lines using "Read spreadsheet file".
    Now you have to select rows by your OPmode as there's no subVI made for your data...
    Best regards,
    GerdW

  • Create CSV from Log File

    I have a set of logs that I need to run some analysis on. I'm trying to convert them to CSV format so I can analyze them with Excel or LogParser.
    The basic process I'm trying to implement is:
    1. Search entries for ERROR in log file
    2. Split components of the error line
    3. Create output CSV.
    I was able to address item 1 with this statement:
    Select-string -path C:\my\logfile\file.log -Pattern "\W\sERROR:\s" -CaseSensitive -AllMatches | select -expand line
    This is a sample of the output lines I'm getting
    01/31/2014 14:07:00 USERC <25000> ERROR: 1222 - SQLSTATE: HYT00 - Execute failed:  Lock request time out period exceeded. on line 9461 in Quote Entry - Payable, Accounts
    01/31/2014 14:07:11 USERD <25000> ERROR: 1222 - SQLSTATE: HYT00 - Execute failed:  Lock request time out period exceeded. on line 324491 in &Orders on object SELECT_ORD_BTN.
    01/31/2014 15:49:34 USERG <25000> ERROR: 50000 - SQLSTATE: 42000 - Execute failed:  \n\nNo Lines entered. Must finish entering lines. on line 281522 in Quote Entry - Payable, Accounts
    01/31/2014 15:51:01 USERH <25000> ERROR: 50000 - SQLSTATE: 42000 - Execute failed:  \n\nNo Lines entered. Must finish entering lines. on line 281522 in Quote Entry - Payable, Accounts
    I'm having issues breaking these lines into their components. Ideally, I'd like to create an output CSV file that splits these lines into components thus:
    DATE, TIME, USER, ERROR, ERRORID, MESSAGE
    Following on from the method I mentioned above, I was planning to pipe the output into a foreach and perhaps use regex to extract the parts of the line, but I'm having difficulties. I have the regex figured out, and I have tried combining it with the -split operator and Select-String, but I'm not having much luck.
    What is the best method to process these lines so I can split them at specific locations in the string?
    Using one of the example outputs above, I need to split the line in the following spots (note the commas)
    01/31/2014, 15:51:01, USERH, <25000> ERROR:, 50000 - SQLSTATE:, 42000 - Execute failed:  \n\nNo Lines entered. Must finish entering lines. on line 281522 in Quote Entry - Payable, Accounts

    Well, I ended up using
    $mydate=$line.split(" ")[0]
    $myTime=$line.split(" ")[1]
    ... etc
    to extract the first 6 fields in the line as they're all (more or less consistently) separated by spaces. For the final (message) field, I used your recommendation above.
    I redistributed the fields so they'd split at the spaces and I concatenated 
    01/31/2014 | 15:51:01 | USERH | <25000> | ERROR: 50000 - | SQLSTATE: 42000 - | Execute failed:  \n\nNo Lines entered. Must finish entering lines. on line 281522 in Quote Entry - Payable, Accounts.
    Thanks for pointing me in the right direction.
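    For reference, a hedged sketch of the regex route discussed above. The named-group pattern follows the sample lines and the paths come from the question, but field layouts vary, so treat it as a starting point rather than a drop-in:

    $pattern = '^(?<date>\S+) (?<time>\S+) (?<user>\S+) <(?<error>\d+)> ERROR: (?<errorid>\d+) - (?<message>.*)$'
    Select-String -Path 'C:\my\logfile\file.log' -Pattern '\W\sERROR:\s' -CaseSensitive |
        Select-Object -ExpandProperty Line |
        ForEach-Object {
            if ($_ -match $pattern) {
                [pscustomobject]@{
                    DATE    = $Matches['date'];    TIME    = $Matches['time']
                    USER    = $Matches['user'];    ERROR   = $Matches['error']
                    ERRORID = $Matches['errorid']; MESSAGE = $Matches['message']
                }
            }
        } |
        Export-Csv 'C:\my\logfile\errors.csv' -NoTypeInformation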

  • How to tail log files from particular string

    Hello,
    We would like to tail several log files "live" in PowerShell for a particular string. We have tried the Get-Content command, but without luck: every time, we received results from only one file. I assume that it was caused by the -Wait parameter. Is there any other way to tail multiple files?
    Our sample script below
    dir d:\test\*.txt -include *.txt | Get-Content -Wait | select-string "windows" |ForEach-Object {Write-EventLog -LogName Application -Source "Application error" -EntryType information -EventId 999 -Message $_}
    Any help will be appreciated.
    Mac

    Because we want to capture a particular string from those files. The application writes certain strings from time to time, and when the string appears we want to catch it and send an event to the Application log; after that, our Nagios system will raise an alarm.
    Mac
    Alright, this is my answer, but I think you won't like it.
    Run this PowerShell code in PowerShell ISE:
    $file1 = 'C:\Temp\TFile1.txt'
    '' > $file1
    $file2 = 'C:\Temp\TFile2.txt'
    '' > $file2
    $special = 'windowswodniw'
    $exit = 'exit'
    $sb1 = {
        gc $using:file1 -Wait | %{
            if ($_ -eq $using:exit) {
                exit                                              # terminate this job
            } else {
                sls $using:special -InputObject $_ -SimpleMatch   # pass through matching lines only
            }
        } | %{
            Write-Host '(1) found special string: ' $_
        }
    }
    $sb2 = {
        gc $using:file2 -Wait | %{
            if ($_ -eq $using:exit) {
                exit
            } else {
                sls $using:special -InputObject $_ -SimpleMatch
            }
        } | %{
            Write-Host '(2) found special string: ' $_
        }
    }
    sajb $sb1
    sajb $sb2
    In this code, $file1 and 2 are the files being waited for.
    As I understood you, you care only for the special string, which is in the variable $special.
    All other strings will be discarded.
    Also, whenever a string equal to $exit is written to the file, the background job corresponding to that file will be terminated, automatically! (simple, right?)
    In the example above, I use only 2 files (being watched) but you can extend it, easily, to any number (as long as you understand the code).
    If you are following my instructions, at this point you have PowerShell ISE running, with 2 background jobs waiting for data to be input to $file1 and 2.
    Now, it's time to send data to $file1 and 2.
    Start PowerShell Console to send data to those files.
    From its command line, execute these commands:
    $file1 = 'C:\Temp\TFile1.txt'
    $file2='C:\Temp\TFile2.txt'
    $exit='exit'
    Notice that $file1 and 2 are exactly the same as those defined in PowerShell ISE, and that I've defined the string that will terminate the background jobs.
    Run these commands in the PowerShell console:
    'more' >> $file1
    'less' >> $file1
    'more' >> $file2
    'less' >> $file2
    These commands will provoke no consequences, because these strings will be discarded (they do not contain the special string).
    Now, run these commands in the PowerShell console:
    'windowswodniw' >> $file1
    '1 windowswodniw 2' >> $file1
    'more windowswodniw less' >> $file1
    'windowswodniw' >> $file2
    '1 windowswodniw 2' >> $file2
    'more windowswodniw less' >> $file2
    All of these will be caught by the code, because they contain the special string.
    Now, let's finish the background jobs with these commands:
    $exit >> $file1
    $exit >> $file2
    The test I'm explaining is now DONE, TERMINATED, FINISHED, COMPLETED, ...
    Time to get back to PowerShell ISE.
    You'll notice that it printed out this (right at the beginning):
    Id Name PSJobTypeName State HasMoreData Location Command
    1 Job1 BackgroundJob Running True localhost ...
    2 Job2 BackgroundJob Running True localhost ...
    At PowerShell ISE's console, type this:
              gjb
    And you'll see output like:
    Id Name PSJobTypeName State HasMoreData Location Command
    1 Job1 BackgroundJob Completed True localhost ...
    2 Job2 BackgroundJob Completed True localhost ...
              (  They are completed!  )
    Which means the background jobs are completed.
    See the background jobs' outputs by running this:
              gjb | rcjb
    The output, will be something like this:
    (1) found special string: windowswodniw
    (1) found special string: 1 windowswodniw 2
    (1) found special string: more windowswodniw less
    (2) found special string: windowswodniw
    (2) found special string: 1 windowswodniw 2
    (2) found special string: more windowswodniw less
    I hope you are able to understand all this (the rubbishell coders, surely, are not).
    In my examples, the strings caught are written to host's console, but you can change it to do anything you want.
    P.S.: I'm using PowerShell, but I'm pretty sure you can use an older PowerShell (version 3). Anything less is not PowerShell anymore. We can call it RubbiShell.
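    For comparison, a more compact variant (my own sketch, not from the thread): start one background job per file and let Select-String do the filtering; collect whatever the jobs found later with Get-Job | Receive-Job.

    $special = 'windowswodniw'
    Get-ChildItem 'D:\test\*.txt' | ForEach-Object {
        Start-Job -ArgumentList $_.FullName, $special -ScriptBlock {
            param($path, $pattern)
            # Tail the file and emit every line containing the pattern
            Get-Content $path -Wait | Select-String $pattern -SimpleMatch |
                ForEach-Object { "$path : $_" }
        }
    }
    # Later: Get-Job | Receive-Job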

  • How to get result from another JSP file?

    I have to write a JSP (my.jsp) to get information from another JSP file (other.jsp).
    other.jsp returns an integer value (0/1) so that the user can tell whether a certain service is available. In my.jsp I need to collect that result, along with other information from a text file, to make up an XML string.
    How can I call other.jsp to get the result? Thanks a lot.

    Hi, I think I didn't describe the problem clearly enough. In fact, there is a JSP file, and if our database is currently connected, the JSP will return value 1; otherwise, it will return 0. My Java program needs to get that result, then form an XML string and send the string back to the client. I'm just wondering how I can write such a program to read the result from the JSP file. Thanks a lot.

    Why is this function implemented as a JSP file? It should be implemented as a bean. It would be simple to get the information you require from that bean.

  • '-1' bytes in log file - iPlanet Web Proxy Server 3.6

    I'm running iPlanet Web Proxy Server 3.6 and getting strange results in the log file using the extended format: where the number of bytes should be (c1, the content-length sent to the client by the proxy), I regularly get a '-1' instead of the number of bytes. Can anyone tell me where this is coming from and how to stop it?

    Someone in the Web Proxy Server forum might be able to. I guess you accidentally posted in the Web Server forum. However, if your question is time- or business-critical, you should probably contact Sun directly: http://www.sun.com/support

  • Stale status of Redo log files...

    Hi,
    Why, in the following SQL results, is the log file in group #3 not STALE, given that it is not currently in use?
    SQL> select * from v$log;
        GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARCHIVED STATUS           FIRST_CHANGE# FIRST_TIME
             1          1        485   52428800          1 NO       CURRENT               14310098 17/05/2007
             2          1        483   52428800          1 NO       INACTIVE              14265944 16/05/2007
             3          1        484   52428800          1 NO       INACTIVE              14292318 17/05/2007
    SQL>
    SQL> select * from V$LOGFILE;
        GROUP# STATUS  TYPE    MEMBER                                                                           IS_RECOVERY_DEST_FILE
             3         ONLINE  C:\ORACLE\PRODUCT\10.2.0\ORADATA\EPESY\REDO03.LOG                                NO
             2 STALE   ONLINE  C:\ORACLE\PRODUCT\10.2.0\ORADATA\EPESY\REDO02.LOG                                NO
             1         ONLINE  C:\ORACLE\PRODUCT\10.2.0\ORADATA\EPESY\REDO01.LOG                                NO
    NOTE: I use Oracle 10g Release 2 on a WinXP platform. The database is running in NOARCHIVELOG mode.
    Many thanks,
    Simon

    The normal status is null (File is in use).
    The stale status indicates a 'problem', as the documentation excerpt below explains:
    A redo log file becomes INVALID if the database cannot access it. It becomes STALE if the database suspects that it is not complete or correct. A stale log file becomes valid again the next time its group is made the active group.

  • Data Services 4.0 Designer. Job Execution but empty log file no matter what

    Hi all,
    I am running DS 4.0. When I execute my batch_job via Designer, the log window pops up but is blank, i.e. I cannot see any trace messages.
    It doesn't matter if I select "Print all trace messages" in the execution properties.
    The Job Server is running on a separate server. The only thing I have locally is my Designer.
    If I log into the Data Services Management Console and select the job server, I can see trace and error logs from the job. So I guess what I need is for this stuff to show up in my Designer?
    Did I miss a step somewhere?
    I can't find anything in the docs about this.
    Thanks

    Awesome, thanks Manoj.
    I found the log file. In it, the relevant lines for the last job I ran are:
    (14.0) 05-11-11 16:52:27 (2272:2472) JobServer:  Starting job with command line -PLocaleUTF8 -Utip_coo_ds_admin
                                                    -P+04000000001A030100100000328DE1B2EE700DEF1C33B1277BEAF1FCECF6A9E9B1DA41488E99DA88A384001AA3A9A82E94D2D9BCD2E48FE2068E59414B12E
                                                    48A70A91BCB  -ek********  -G"70dd304a_4918_4d50_bf06_f372fdbd9bb3" -r1000 -T1073745950  -ncollect_cache_stats
                                                    -nCollectCacheSize  -ClusterLevelJOB  -Cmxxx -CaDesigner -Cjxxx -Cp3500 -CtBatch  -LocaleGV
                                                    -BOESxxx.xxx.xxx.xxx -BOEAsecLDAP -BOEUi804716
                                                    -BOEP+04000000001A0301001000003F488EB2F5A1CAB2F098F72D7ED1B05E6B7C81A482A469790953383DD1CDA2C151790E451EF8DBC5241633C1CE01864D93
                                                    72DDA4D16B46E4C6AD -Sxxx.xxx.xxx -NMicrosoft_SQL_Server -Qlocal_repo  coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e" -l"C:\Program Files (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e/trace_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -z"C:\Program Files
                                                    (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e/error_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -w"C:\Program Files
                                                    (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e/monitor_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -Dt05_11_2011_16_52_27_9
                                                    (BODI-850052)
    (14.0) 05-11-11 16:52:27 (2272:2472) JobServer:  StartJob : Job '05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3' with pid '148' is kicked off
                                                    (BODI-850048)
    (14.0) 05-11-11 16:52:28 (2272:2072) JobServer:  Sending notification to <inet:10.165.218.xxx:56511> with message type <4> (BODI-850170)
    (14.0) 05-11-11 16:52:28 (2272:2472) JobServer:  AddChangeInterest: log change interests for <05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3> from client
                                                    <inet:10.165.218.xxx:56511>. (BODI-850003)
    (14.0) 05-11-11 17:02:32 (2272:2472) JobServer:  RemoveChangeInterest: log change interests for <05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3> from client
                                                    <inet:10.165.218.xxx:56511>. (BODI-850003)
    (14.0) 05-11-11 19:57:45 (2272:2468) JobServer:  GetRunningJobs() success. (BODI-850058)
    (14.0) 05-11-11 19:57:45 (2272:2468) JobServer:  PutLastJobs Success.  (BODI-850001)
    (14.0) 05-11-11 19:57:45 (2272:2072) JobServer:  Sending notification to <inet:10.165.218.xxx:56511> with message type <5> (BODI-850170)
    (14.0) 05-11-11 19:57:45 (2272:2472) JobServer:  GetHistoricalLogStatus()  Success. 05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3 (BODI-850001)
    (14.0) 05-11-11 19:57:45 (2272:2472) JobServer:  GetHistoricalLogStatus()  Success. 05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3 (BODI-850001)
    It does not look like I have any errors with respect to connectivity (or any errors at all...).
    Please advise on what, if anything, you notice from the log file and/or next steps I can take.
    Thanks.
