SharePoint writer metadata information in a SharePoint farm with multiple WFE servers.

I am working with the Microsoft Volume Shadow Copy Service (VSS) framework. I know that in a three-tier SharePoint environment, the SharePoint writer metadata on the WFE server gives all the information related to that farm.
My questions are:
a) How would I get all the information related to the SharePoint farm from the SharePoint writer metadata in an environment where multiple WFE servers are configured?
b) Is it possible that in a SharePoint farm where multiple WFE servers are deployed, the SP writer of only one WFE server (a master/main server) contains all the information about the respective SP farm servers in its metadata?

Hi Aaditya,
All writer metadata is stored in the Writer Metadata Document, which is produced by the writer. The backup application uses the Writer Metadata Document to get information about that writer, the data it owns,
and how to restore that data. Once the writer produces it, the Writer Metadata Document is read-only to the backup application.
The Writer Metadata Document contains three sets of data: writer identification and classification information, writer-level specifications, and component data.
To get the writer metadata, you can use the
IVssBackupComponents::GetWriterMetadata method.
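As a practical first step in a multi-WFE farm, you can also check which servers actually expose the SharePoint writer before requesting its metadata. A minimal remoting sketch, assuming WinRM is enabled; the server names are placeholders, and the writer's display name varies by SharePoint version:
$wfes = 'WFE1','WFE2'
Invoke-Command -ComputerName $wfes -ScriptBlock {
    # vssadmin reports only the writers hosted on the local server,
    # which is why each WFE is queried individually.
    vssadmin list writers | Select-String -Pattern 'Writer name','State'
}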
For more information, you can refer to these articles:
http://msdn.microsoft.com/en-us/library/aa384992(v=vs.85).aspx
http://msdn.microsoft.com/en-us/library/aa384996(v=vs.85).aspx
http://blogs.technet.com/b/dpm/archive/2011/06/02/explaining-sharepoint-data-source-enumeration-with-data-protection-manager-2010.aspx
Best Regards,
Eric
Eric Tao
TechNet Community Support

Similar Messages

  • How to achieve no-downtime solution deployment on farms with multiple WFEs and LB

    Taking SharePoint Solution Deployer, my open-source PowerShell deployment script, to the next level,
    Bill Simser gave me the idea of making deployment even smoother on farms with multiple WFEs and a load balancer, in order to achieve a no-downtime deployment.
    The basic idea is to deploy the solutions on each WFE one-by-one by:
    1. Taking one WFE offline
    2. Installing the solution with the -local switch
    # Solution deployment
    Install-SPSolution -Identity <solutionname>.wsp -GACDeployment -CASPolicies -Local
    # Solution upgrade
    Update-SPSolution -Identity <solutionname>.wsp -LiteralPath LocalPathOfTheSolution.wsp -GACDeployment -Local
    3. Run post-deployment actions on the WFE (i.e. restart services, recycle app pools or run an IIS reset, warm up the server), which my script already does for each server
    4. Take WFE online again
    5. Repeat steps 1-4 for all other WFEs (see the consolidated sketch below)
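    A consolidated sketch of steps 1-5 as a loop. Remove-FromLB and Add-ToLB are hypothetical placeholders, since the real calls depend on the load balancer in use; the server names and solution path are placeholders too:
    $wfes = 'WFE1','WFE2'
    foreach ($wfe in $wfes) {
        Remove-FromLB $wfe                                # 1. take the WFE offline
        Invoke-Command -ComputerName $wfe -ScriptBlock {
            Add-PSSnapin Microsoft.SharePoint.PowerShell
            # 2. upgrade the solution on this server only
            Update-SPSolution -Identity mysolution.wsp `
                -LiteralPath C:\deploy\mysolution.wsp -GACDeployment -Local
            iisreset                                      # 3. post-deployment action
        }
        Add-ToLB $wfe                                     # 4. take the WFE online again
    }                                                     # 5. repeats for the other WFEs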
    I am struggling with three things here:
    1. The whole deployment process could be quite risky when something goes wrong in between. In order to roll back, I would require the original solution if it was already deployed before (which I can of course back up before I replace it).
    Anything which involves changing the content DBs should of course be done after the solution is deployed to the whole farm, so this should not hurt in this case.
    Anyway, MSDN says that the "DeployLocal" method (which I assume is the same as the -Local switch in PowerShell) should only be used for troubleshooting purposes.
    So it would be great to hear about anyone's experiences with it.
    2. As there can be different types of load balancers (hardware, software) which might not be configurable through my script, I assume that taking the WFE out of the load balancer may not always be possible.
    So I thought about just taking the server offline.
    I haven't found an option yet to take only one server in the farm offline (without removing it from the farm, of course), so maybe I'm missing something. Any ideas?
    3. Before taking a single WFE offline, I would like to ensure that this server does not have any open sessions or ongoing user operations. Unfortunately, I have only found the possibility to quiesce the whole farm, but not a single server. Am I missing something?
    I appreciate any ideas which might point me in the direction of solving the overall goal!
    SharePoint Architect, Speaker, MCP, MCPD, MCITP, MCSA, MCTS, Scrum Master/Product Owner
    Blog: www.matthiaseinig.de, Twitter: @mattein
    CodePlex: SharePoint Software Factory, SharePoint Solution Deployer

    Hi Mike,
    unfortunately not. I tried several different approaches but didn't really succeed reliably with any of them. So eventually I gave up on it.
    The idea Eric Hasley brings up in the comments of the blog post you mentioned is interesting though:
    "There is another approach that has worked for me in the past.  Because the deployment to each server is handled through a timer job,
    by stopping the timer service in a controlled fashion you can rollout your solution without incurring any user outage."
    It could work like that (in theory):
    1. Stop the SPTimerV4 service on all servers in the farm apart from one.
    2. Take the one to deploy to out of the NLB.
    3. Wait until it has no connections.
    4. Deploy the solutions on it in the ordinary way (e.g. with my SharePoint Solution Deployer ;))
    5. Put it back into the NLB and take the others out.
    6. Wait until they have no connections left.
    7. Activate the timer service on the other servers and let them deploy.
    8. Put them back into the NLB.
    No clue if this actually works, and you still have the problem with the NLB, so it could take a while.
    Also, I am not certain what happens in step 5 if users use different versions of your solutions at the same time (old version on the remaining open connections, new version on the updated server).
    I do not have a suitable farm at hand to play with though, so I can't test it.
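    For steps 1 and 7 specifically, a rough remoting sketch (assuming WinRM is enabled on the farm servers; the server names are placeholders):
    $otherServers = 'WFE2','WFE3','APP1'
    # step 1: hold back the deployment timer job on all servers but one
    Invoke-Command -ComputerName $otherServers -ScriptBlock { Stop-Service -Name SPTimerV4 }
    # ... deploy on the isolated WFE, swap the NLB membership ...
    # step 7: let the held-back servers pick up and run the deployment job
    Invoke-Command -ComputerName $otherServers -ScriptBlock { Start-Service -Name SPTimerV4 }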
    Cheers
    Matthias
    Matthias Einig, CEO, SharePoint MVP
    Blog: www.matthiaseinig.de, Twitter: @mattein
    Projects: SharePoint Code Analysis Framework (SPCAF), SharePoint Code Check (SPCop), SharePoint Software Factory, SharePoint Solution Deployer

  • Does cancelling the SharePoint Config wizard break a farm with multiple servers?

    I'm having a discussion about the impact of the accidental cancellation of a started SP Config Wizard. The wizard was cancelled almost immediately after accidentally clicking on it; only a few websites were stopped.
    The logs did not confirm any damage to the server it was started on (no errors related to defects logged shortly after the wizard was cancelled), and Central Admin doesn't complain about any inconsistencies either. A colleague says the environment should be corrupt and
    opts for a full restore of the environment.
    This conclusion sounds premature and extreme to me; why should the Config Wizard break things just by being started? Can anyone confirm this is the case, or explain why this could be a possibility? I do not want to restore the entire farm based on assuming
    the worst.
    The logs don't confirm an invalid state of the environment, so what is his theory based on? Can any specialist shed light on the impact of running and cancelling the wizard on a server? Is he right, and is the only option in this scenario a full database restore?
    Is this a known issue or known behaviour of the Config Wizard? It doesn't sound logical to me.
    [edit] I hope the moderator can forgive me and move this topic to General; my apologies for making this mistake.

    Thanks Rana, that's what my common sense also tells me about this situation. Rerunning the SPCW in such a situation would, in my opinion, normally just complete successfully with no issues. I wanted to confirm here whether the latter is normal behavior, since
    my colleague says that in general cancelling the SPCW breaks SharePoint by default; I was kind of surprised.
    In my own experience, if I have an error-free environment and I run the SPCW and cancel it, rerunning the wizard will just finish the SPCW processes without any problem, since nothing has changed in the binaries etc.
    I know there is a possibility for the wizard to break things in scenarios where changes have been made to the environment (changes always pose risks), but this isn't the case here.

  • How to write to one channel of a task with multiple channels? (plus other things...)

    So I have a USB-6009 DAQ. It has 12 digital output lines. I want one channel that is "Dev0\line0:10", which represents an address bus in my application, and a second channel, "Dev0\line11", which represents a program enable line in my application.
    I have tried creating two different tasks and adding one channel to each task. The only task that worked was the task with "Dev0\line0:10". It was always the task containing that channel, regardless of the order of creation. So then I moved on to a different method. (I read somewhere that I should only create one task of each type, like only 1 DO task, only 1 AO task, etc. However, I am also using the two analog outputs and have a task for each AO, and they work just fine.)
    I tried adding both channels to one task. But when I needed to control only the address bus, I had to provide some information for the other channel as well. This was a little trying, but I could configure it that way. But it turned out to be easier for me to just make one channel with all the lines and OR in the data for line11 with each write.
    I just wondered if it is possible to write to one particular channel of a task and not the other channel? That would really be the ideal solution for me, especially if I could write multiple samples to the one channel while I left the other channel alone. Which brings up another complaint: why does WriteMultiSamplePort only work if I use a DigitalMultiChannelWriter, and not if I use a DigitalSingleChannelWriter, even though my task only has one channel (which, by the way, I set up as one channel for all lines)? A perplexing issue, to be sure.
    And no, I can't just load all my samples into an array and write them all at once, because I also have to manipulate the two analog outputs in between the various digital writes.
    I am using NI-DAQmx 7.5 and C#. I am trying to use the DAQ to program a digital switch, which has proven to be a real challenge. In push-pull mode there is too much ringing for the switch's programming port to tolerate. But the switch's interface is LVTTL, so I needed the 3.3V. When I changed to open-collector, I had to use voltage dividers to drop down to 3.3V. But the rise time using open-collector is too slow for me to program the switch in serial mode, so I had to change to parallel. The switch has an 11-bit multiplexed ADDR/DATA bus. So the DAQ I had chosen, which had plenty of lines for serial programming, is now strained to its absolute limits by the parallel interface. ARGH. The only output I am not currently using is the counter, and I'm going to need it if I ever want to read back from the switch. But first I have to separate the ADDR/DATA bus from the CS line on the DIO lines of the DAQ. And I don't know for sure what I'm going to do about the voltage level translation when I have to go bidirectional. Maybe I can filter out the ringing in push-pull mode? Any thoughts on that?

    Hi Saikey,
    In most cases, you are exactly right: you can only use one task for one type of operation (i.e. only one analog input task in the same program). With the USB-6009, however, you can have multiple digital output tasks running at the same time; I was able to run a digital output program with two different digital output tasks configured for a USB-6009.
    However, you stated that it would be better if you had everything in a single task for your application and wrote data to only some of the channels. The easiest way to do this is to modify your array of output data so that only the data for the one channel changes. For example, if you keep writing a 0 to the channels that do not need new data, nothing will change.
    If you have to change your analog outputs during this program, you could create an event structure that would stop and restart the analog output tasks without changing the digital output data. I hope that you find this information helpful.
    Regards,
    Hal L.

  • Any ideas on a cleaner way to write this? - run through a function with multiple steps; if any step fails, restart the function from the beginning

    Basically there are 3 different functions for various "tests"; they each return 1 if the test passes and 0 on failure. A "master" function calls these 3 functions in the desired order and keeps a counter
    of tests that pass. The counter has to be 1 to proceed to step 2, has to be 2 to proceed to step 3, and when it's 3 the loop closes.
    This approach is giving the expected results, but I was wondering if anyone has ideas on a cleaner approach? What I really wanted to accomplish was to have the 3 tests pass in succession, to provide an overall pass before proceeding to the next step,
    and also to exit the function if any step fails rather than running through each step before trying again. If the script were to, for example, loop through the 3 tests and wait until each one passes, tests 1 and 2 might pass, but then 3 might fail
    a few times and finally pass before the script exits. I did not want that scenario to count as an overall pass.
    Function Test1 {
        If (PASS) { Return 1 }   # pseudocode: 1 on pass
        If (FAIL) { Return 0 }   # pseudocode: 0 on failure
    }
    Function Test2 {
        If (PASS) { Return 1 }
        If (FAIL) { Return 0 }
    }
    Function Test3 {
        If (PASS) { Return 1 }
        If (FAIL) { Return 0 }
    }
    Function TestAll {
        [int]$counter = 0
        $check1 = Test1
        If ($check1 -eq 1) { $counter++ }
        If ($counter -lt 1) { Exit }
        Start-Sleep -s 15
        $check2 = Test2
        If ($check2 -eq 1) { $counter++ }
        If ($counter -lt 2) { Exit }
        Start-Sleep -s 15
        $check3 = Test3
        If ($check3 -eq 1) { $counter++ }
        If ($counter -lt 3) { Exit }
        Return $counter
    }
    Do { $STATUSCOUNT = TestAll }
    While ($STATUSCOUNT -lt 3)

    This is cleaner:
    Function Test1 {
        If (PASS) { Return 1 }
        If (FAIL) { Return 0 }
    }
    Function Test2 {
        If (PASS) { Return 1 }
        If (FAIL) { Return 0 }
    }
    Function Test3 {
        If (PASS) { Return 1 }
        If (FAIL) { Return 0 }
    }
    Function TestAll {
        while ($true) {
            # inner loop runs the tests in order; any failure breaks out to retry
            while ($true) {
                $counter = 0
                if (($counter += Test1) -ne 1) { break }
                if (($counter += Test2) -ne 2) { break }
                if (($counter += Test3) -ne 3) { break }
                return
            }
            Write-Host ('try again ' + $counter)
        }
    }
    TestAll
    Of course, all of your test functions have syntax errors.
    \_(ツ)_/

  • CSM health probe for server farm with multiple vservers

    Is there a way to specify the vserver port that a health probe monitors when multiple vservers are configured for the same serverfarm? Let's say I have a serverfarm named farm1. farm1 serves two ports, www and https, so two vservers, vserver_www and vserver_https, are configured and bound to farm1. I would like to enable an HTTP health probe on farm1 with the intention of only monitoring vserver_www's HTTP port, but instead the health probe monitors both www and https, and since an HTTP probe on https fails, it takes farm1's reals and both vservers (vserver_www and vserver_https) out of service. Is there a way to configure a health probe to monitor a specific port? Or should I create two duplicate serverfarms, farm1 bound to vserver_www and farm2 bound to vserver_https, and only enable the HTTP health probe on farm1? Any other ideas welcomed.

    Appreciate the feedback. I also found what I was looking for in the configuration examples. To summarize, I've borrowed the comment from the URL below:
    # The port for the probe is inherited from the vservers.
    # The port is necessary in this case, since the same farm
    # is serving a vserver on port 80 and one on port 23.
    # If the "port 80" parameter is removed, the HTTP probe
    # will be sent out on both ports 80 and 23, thus failing
    # on port 23 which does not serve HTTP requests.
    http://www.cisco.com/univercd/cc/td/doc/product/lan/cat6000/mod_icn/csm/csm_4_2/config/cfgxpls.htm

  • Backup Sharepoint 2013 SP1 Farm with SQL 2014 RTM "Always On" using System Center 2012 R2 Data Protection Manager

    Is backing up and restoring a SharePoint 2013 SP1 farm with SQL 2014 RTM "Always On" High Availability now supported using System Center 2012 R2 Data Protection Manager?
    I cannot find information anywhere.
    Regards,
    Igor

    This is a DPM supportability issue, I believe. Last I heard, no, it was not supported. SharePoint 2013 does not support SQL 2014 until the April 2014 CU. The CU should be out soon, although it appears to have been delayed (it usually comes out on Patch Tuesday,
    which was this past Tuesday).
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Sharepoint 2013 Foundation three tier farm with two Webservers in NLB

    Hello,
    I have been struggling with a problem for the last three days.
    I have installed and configured a SharePoint 2013 three-tier farm with SharePoint 2013 Foundation and MS SQL 2014 Express. This is a test farm and all the servers are Windows 2012 R2.
    I have one SQL server, one application server, and two web servers. The two web servers are configured with multicast NLB. The NLB name is "sharepoint.ws.domain.net". The IP of the NLB is also in our DNS zone. I have made a web application with
    the name "sharepoint.ws.domain.net" on port 80 (the NLB name) and a site collection with the same name.
    Now when I am working on the SharePoint site, I very often get a login window, or I get the message "An error occurred while processing the request on the server. The status code returned from the server was: 0".
    The error comes when I try to create a sub-site (mostly with no permissions inheritance), but not always. I also sometimes get the same message when I upload files (MS Office documents and PDF files).
    The login window comes when I am navigating through the sites, but also not always. I go to the site with IE11, and the site is also in the intranet security zone.
    Can you help me with this one?
    Kind Regards
    Ioannis Kyriakidis

    With no hostname on the Web Application, you have to create host-named site collections, so that complicates things a bit.
    As far as the NLB setup goes, you create Web Applications the same way you would otherwise. NLB is simply installed on both web servers, and they are placed into the NLB VIP (virtual IP). The DNS A record points at the VIP.
    Also set up your Windows NLB using unicast instead of multicast. If you have certain types of switches that block unicast ARP from multiple clients (e.g. Cisco), you may have to make an exception for them (e.g. http://www.cisco.com/c/en/us/support/docs/switches/catalyst-6500-series-switches/107995-microsoft-nlb.html).
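    If you script the NLB setup, here is a hedged sketch using the NetworkLoadBalancingClusters module that ships with Windows Server 2012 R2; the interface name, cluster IP, and node name are placeholders:
    Import-Module NetworkLoadBalancingClusters
    # create the cluster in unicast mode on the first web server
    New-NlbCluster -InterfaceName 'Ethernet' -ClusterName 'sharepoint.ws.domain.net' `
        -ClusterPrimaryIP 10.0.0.50 -OperationMode Unicast
    # join the second web server to the cluster
    Add-NlbClusterNode -InterfaceName 'Ethernet' -NewNodeName 'WFE2' -NewNodeInterface 'Ethernet'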
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • How can I copy documents from a SharePoint on-premises library to a SharePoint Online library while preserving their metadata?

    How can I copy documents from a SharePoint on-premises library to a SharePoint Online library while preserving their metadata?
    I used Open with Explorer to drag and drop the files, but the metadata is not copied. Thanks.

    To maintain the metadata you'll need to use one of the third-party tools that does this kind of migration. Metalogix has a product with a free trial that we have used before (I don't remember whether the free version maintains metadata or not).
    You can read about it here:
    http://www.metalogix.com/Products/Content-Matrix.aspx
    Paul Stork SharePoint Server MVP
    Principal Architect: Blue Chip Consulting Group
    Blog: http://dontpapanic.com/blog
    Twitter: Follow @pstork
    Please remember to mark your question as "answered" if this solves your problem.

  • Backup Sharepoint 2013 Farm with SQL 2012 "Always On" using System Center 2012 R2 Data Protection Manager

    Is backing up and restoring a SharePoint 2013 farm with SQL 2012 "Always On" High Availability now supported using System Center 2012 R2 Data Protection Manager?
    I cannot find confirmation anywhere.
    Regards,
    John

    Per this thread:
    http://social.technet.microsoft.com/Forums/en-US/0c047737-4733-4ad5-a24d-3e6e6ff42f70/dpm-2012-sp1-and-sharepoint-2013-on-a-sql-2012-alwayson-ag?forum=dpmsharepointbackup, no, it does not look like this is supported.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Sharepoint search with multiple managed metadata terms

    How do we search for multiple managed metadata terms programmatically? I tried the following, but the URL below does not yield any results:
    http://app-0efff1c35fb5bc.xxxxxxx.com/sites/test/webpart/_api/search/query?querytext=%27owstaxIdDocumentx0020Type:Checklist;Intel%27&selectproperties=%27Path,Title,Author,LastModifiedTime&rowlimit=500&trimduplicates=false&enablequeryrules=false
    owstaxIdDocumentx0020Type:Checklist;Intel
    The results exist when the querytext is changed to owstaxIdDocumentx0020Type:Checklist. How do we perform an "AND" operation with multiple terms?
    V

    Have you tried:
    owstaxIdDocumentx0020Type:Checklist AND owstaxIdDocumentx0020Type:Intel
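    For example, the full REST call with the two term filters AND-ed together and URL-encoded might look like this (a hedged sketch reusing the site URL and properties from your example):
    $site = "http://app-0efff1c35fb5bc.xxxxxxx.com/sites/test/webpart"
    $kql  = "owstaxIdDocumentx0020Type:Checklist AND owstaxIdDocumentx0020Type:Intel"
    $url  = "$site/_api/search/query?querytext='$([uri]::EscapeDataString($kql))'" +
            "&selectproperties='Path,Title,Author,LastModifiedTime'&rowlimit=500" +
            "&trimduplicates=false&enablequeryrules=false"
    Invoke-RestMethod -Uri $url -UseDefaultCredentials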
    Blog | SharePoint Field Notes Dev Tools | SPFastDeploy | SPRemoteAPIExplorer

  • SharePoint Multi-Server Farm with LB and SQL Cluster.

    Hello,
    We are setting up a multi-server SharePoint 2013 farm with a load balancer.
    We are also using SQL Server 2012 in a two-node cluster.
    We have a SQL Server instance running as a service. I also managed to install PowerPivot for SharePoint as an instance.
    Looking at various articles, I wonder whether this cluster setup is supported in SharePoint 2013.
    Can the SharePoint farm use this PowerPivot instance as a failover service?
    Or do I have to install Analysis Services separately on both SQL nodes?
    If so, how do I configure Excel Services on SharePoint?
    Thank you
    Sham

    PowerPivot is installed on the SharePoint server and is a scale-out service (you just install it on multiple SharePoint servers and make sure the Service Instance is started in SharePoint). The same goes for Excel Calculation Services (but for that one you don't have
    to install anything, just start it).
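    Starting the Excel Calculation Services instance on each web server can also be scripted; a minimal sketch (the TypeName string is assumed and may differ by version or language pack):
    Add-PSSnapin Microsoft.SharePoint.PowerShell
    # start the service instance on this server if it is not already online
    Get-SPServiceInstance -Server $env:COMPUTERNAME |
        Where-Object { $_.TypeName -eq 'Excel Calculation Services' } |
        Start-SPServiceInstance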
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Retrieving Sharepoint List Item Information for SAP Best Approach

    Hi
    We have a request to retrieve SharePoint list item information in order to create DVS objects in SAP. Has anybody done something like that, getting information out of SharePoint into SAP? What would the best approach be, as implementations are usually the other way around, i.e. getting data out of SAP into SharePoint? I think Duet Enterprise would be overkill for this, and it also requires additional licenses.
    So I have been thinking of using either a PI/web service with the SharePoint WSDL to retrieve the information, or maybe there are also possibilities using WebDAV. There are some CL_HTTP_WEBDAV classes, but I am not sure whether and how they could be used for this.
    Does anybody have how-tos or examples for either approach? Feedback is much appreciated.
    Thanks,
    Daniel

    Without using a ridiculously long workflow (which even then may not work), I do not think this is feasible in a SharePoint list. A workflow can be used to alter other fields in an item, but not for incrementing items as you require. You would be better off changing
    to datasheet view (Quick Edit in 2013) and changing them all manually, or changing to terms such as priority high, low, etc. Overall, no, this is not really possible.
    Brendan Lee

  • UPRE to replicate User Profiles and Managed Metadata to High Availability backup farm

    I'm working on an HA project, and have been tasked with replicating our User Profiles and Managed Metadata to a newly created farm. I'll call the current production farm FarmA and the new HA farm FarmB. I've installed the SharePoint2010AdministrationToolkit on FarmB,
    and used SPDiag to generate my first report on FarmB.
    We're not using MySites.
    I'm looking at
    http://technet.microsoft.com/en-us/library/jj891109(v=office.15).aspx and have some questions. Anything I do on FarmA will be done through a Change Control, and I want to be able to answer questions intelligently.
    Do the SharePoint2010AdministrationToolkit and UPRE have to be installed on both FarmA and FarmB?
    Does enabling DTC create any security concern, or a performance hit? I haven't done this before.
    Any documented risks with using the replicator?
    Are scheduled replications typically 1 weekly full with nightly incrementals?
    Anything the replication won't copy over to FarmB? Like Synchronization Connections, or additional/custom fields?
    How are GUIDs handled?
    Sorry to pack 6 questions into one post ;)
    Thanks,
    Scott

    To answer the first question, this is what MS says: "The User Profile Replication Engine can be installed on any computer that has access to the source User Profile service application and destination User Profile service application. However, we recommend
    that you install the User Profile Replication Engine on a computer that is part of the source farm or on a computer that is connected to the subnet of the source domain. The User Profile Replication Engine uses the SharePoint Server 2010 User Profile and User
    Profile Change Log to read and write data between user profile stores." http://technet.microsoft.com/en-us/library/cc663011(v=office.15).aspx
    To answer the 4th question: that's true, but a full sync will only be required initially and when there are organizational changes; most of the time it's incremental. To my knowledge, FULL replication will bring over all the details, but I'm not sure though.
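    If it helps, the replication engine is driven from PowerShell. A heavily hedged sketch of a full and an incremental run; the cmdlet names come from the UPRE documentation, but the URLs are placeholders and the parameters should be verified against your installed toolkit version:
    # full replication once, then incrementals on a schedule (URLs are placeholders)
    Start-SPProfileServiceFullReplication -Source http://farma-mysite -Destination http://farmb-mysite
    Start-SPProfileServiceIncrementalReplication -Source http://farma-mysite -Destination http://farmb-mysite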

  • Sharepoint 2010 SQl 2008 r2 Reporting Services Integrated with Claims

    I have a 4-server SharePoint 2010 farm; all servers are Windows 2008 R2 x64.
    Two servers are SharePoint app/WFE servers.
    One server is the SQL SharePoint content DB server, running SQL 2008 R2 Enterprise.
    One server hosts the SQL claims provider DB and Reporting Services on SQL 2008 R2.
    The site is behind a Forefront TMG proxy firewall for both internal and external users, and both locations produce the same results.
    The web application I am having an issue with consists of a primary claims-enabled web app that only has NTLM enabled, with an extended extranet web app that supports claims and NTLM.
    There is currently a site collection that has been configured for Reporting Services. When I log into the site as a SQL claims user, I am able to run Report Builder as a content type, using the New Report Builder Report content type from a library on
    this site. Report Builder launches, I am once again prompted to log in, I log in with the same credentials I used to log into the site, and Report Builder opens up; I am then able to open data sources and reports.
    When I log in to the same site collection as a domain user, I am able to log in to the site and launch Report Builder. When Report Builder opens up, I am prompted to log in; I enter my domain\username and password and nothing happens, the login prompt just appears again,
    blank. On the back end it throws the following exceptions:
    1. Password check on 'username' generated exception: 'System.ServiceModel.FaultException`1[Microsoft.IdentityModel.Tokens.FailedAuthenticationException]: The security token username and password could not be validated. (Fault Detail is equal to Microsoft.IdentityModel.Tokens.FailedAuthenticationException:
    The security token username and password could not be validated.).'.
    2. An exception occurred when trying to issue security token: The security token username and password could not be validated.
    Any thoughts on where I should begin troubleshooting this?

    I think I am getting close. I am now seeing a new error; it seems that the SP farm account is unable to validate my domain account when launching Report Builder.
    Event code: 4006
    Event message: Membership credential verification failed.
    Event time: 4/6/2014 10:59:05 AM
    Event time (UTC): 4/6/2014 2:59:05 PM
    Event ID: 7640af98dab846829f78e7d74a4e1e8a
    Event sequence: 9
    Event occurrence: 8
    Event detail code: 0
    Application information:
        Application domain: /LM/W3SVC/2/ROOT/SecurityTokenServiceApplication-1-130412687342754349
        Trust level: Full
        Application Virtual Path: /SecurityTokenServiceApplication
        Application Path: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\WebServices\SecurityToken\
        Machine name: (SP WFE MACHINE NAME)
    Process information:
        Process ID: 8044
        Process name: w3wp.exe
        Account name: (SP FARM ACCOUNT)
    Request information:
        Request URL:  
        Request path:  
        User host address:  
        User:  
        Is authenticated: False
        Authentication Type:  
        Thread account name: (SP FARM ACCOUNT)
    Name to authenticate: (domain account launching Report Builder)
    Custom event details:
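    Since the failure is inside the SecurityTokenServiceApplication, a quick hedged check is to confirm that the STS endpoint responds on each WFE (the path below is the standard SharePoint web-services location for the STS):
    # run on, or point at, each WFE; a successful response means the STS is up
    Invoke-WebRequest -Uri 'http://localhost:32843/SecurityTokenServiceApplication/securitytoken.svc' -UseDefaultCredentials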
