FIM reporting -- Run FIMPostInstallScriptsForDataWarehouse.ps1 script

We have 3 servers:
1. Server 1 -- FIM Service
2. Server 2 -- Service Manager server + SQL Server 2008 R2 (ServiceManager DB on instance 1; DWStagingAndConfig, DWRepository, and DWDataMart DBs on instance 2)
3. Server 3 -- Data warehouse server
We have installed FIM Reporting and the MPSyncJob has completed successfully. The next step is to run the FIM Reporting post-installation script on the data warehouse server:
.\FIMPostInstallScriptsForDataWarehouse.ps1
However, this script requires access to the "SQLCmd" tools and the "SMCmdletSnapIn" snap-in. Both components are only present when SQL Server resides on the data warehouse server, which is not the case here. The documented workaround is to run the FIM post-installation PowerShell script on the SQL server and create a PSSessionConfiguration on the data warehouse server that the remote SQL server calls to execute the "SMCmdletSnapIn".
To run the script I was following the steps in this article: social.technet.microsoft.com/wiki/contents/articles/17916.troubleshooting-fim-install-fim-data-warehouse-support-scripts-on-a-remote-sql-server.aspx
But when creating the PSSession I get an Access Denied error.
So instead: if I install SQL Server Management Studio on the data warehouse server, I will get the SQL PowerShell cmdlets there, and I could run the script directly on the data warehouse server without creating a PSSession.
Will it work?
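
For reference, a minimal sketch of the shape of that workaround; the endpoint name, startup script path, and server name below are illustrative, not taken from the article:

# On the data warehouse server: register an endpoint whose startup script loads the
# Service Manager snap-in (e.g. a one-line script running: Add-PSSnapin SMCmdletSnapIn).
Register-PSSessionConfiguration -Name SMEndpoint -StartupScript 'C:\Scripts\Load-SMCmdletSnapIn.ps1'
# If New-PSSession/Invoke-Command returns Access Denied, the calling account may lack
# Invoke permission on the endpoint; this opens the permissions dialog:
# Set-PSSessionConfiguration -Name SMEndpoint -ShowSecurityDescriptorUI

# On the SQL server (where the SQLCmd tools live): call the endpoint remotely.
Invoke-Command -ComputerName DWSERVER -ConfigurationName SMEndpoint -ScriptBlock {
    Get-PSSnapin   # the Service Manager snap-in should be loaded here
}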

You can download just the needed pieces from
http://www.microsoft.com/en-us/download/details.aspx?id=16978.
Thanks, Brian
I think Brian wanted to paste the link without the dot at the end ;)
http://www.microsoft.com/en-us/download/details.aspx?id=16978

Similar Messages

  • [FIM Reporting] Start-FIMReportingIncrementalSync.ps1 fails

    I am deploying FIM 2010 R2 SP1 Reporting on a test environment. However, in the post installation phase, the Start-FIMReportingIncrementalSync.ps1 script is failing with the following error (the Start-FIMReportingInitialSync completed successfully though).
    Any insight on what's causing this and how to resolve it?
    Import-FIMConfig : Failure when making web service call.
    SourceObjectID = ff1315de-ed7c-4b0f-90b4-036f8f983faa
    Error = The web service client has encountered the following class of error: SystemConstraint
    Details: Failed Attributes:
    Additional Text Details: The Request contains changes that violate system constraints.
    Correlation Identifier: 2fcd66be-c0ba-41ff-8019-8210cb1f21b5
    Failure Message:
    Request Identifier:
    At C:\Program Files\Microsoft Forefront Identity Manager\2010\Reporting\PowerShell\Start-FIMReportingIncrementalSync.ps1:46 char:47
    +     $undone = $importObject | Import-FIMConfig <<<<  -uri $uri;
        + CategoryInfo          : InvalidOperation: (:) [Import-FIMConfig], InvalidOperationException
        + FullyQualifiedErrorId : ImportConfig,Microsoft.ResourceManagement.Automation.ImportConfig
    Thanks,
    John

    Hi.
    First, please check whether another reporting job is still running. You can verify this in the FIM Portal under Administration -> All Resources -> Reporting Job.
    Find the most recent job and check whether it is still running; FIM will not allow you to create a new reporting job while one is in progress.
    I don't know of a way to stop an existing reporting job (other than waiting for it to complete/timeout/error), but this will at least tell you whether one exists.
    Keep trying
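
    A hedged sketch (not from the reply above) of doing the same check from PowerShell instead of the portal; it assumes the FIMAutomation snap-in is available and that reporting jobs are exposed as ReportingJob resources (the object type and attribute names are assumptions):

    Add-PSSnapin FIMAutomation -ErrorAction SilentlyContinue
    # Export all ReportingJob resources from the FIM Service (object type name is an assumption)
    $jobs = Export-FIMConfig -Uri "http://localhost:5725" -OnlyBaseResources -CustomConfig "/ReportingJob"
    foreach ($job in $jobs) {
        $job.ResourceManagementObject.ResourceManagementAttributes |
            Where-Object { @('DisplayName', 'ReportingJobStatus') -contains $_.AttributeName } |
            ForEach-Object { "{0}: {1}" -f $_.AttributeName, $_.Value }
    }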

  • FIM Reporting initial sync running long time

    We installed FIM Reporting last month. Since then, the FIM Reporting initial sync PowerShell script has been running for a month to sync FIM data to the SCSM server, but only about half of the data has been synced so far.
    We have 4 servers:
    1. Server1 -- FIM server
    2. Server2 -- FIM database
    3. Server3 -- SCSM Service Manager and SQL Server databases for Service Manager and the data warehouse
    4. Server4 -- Data warehouse server
    Because the FIM initial sync script is taking so long, we investigated and found errors on Server 4 (the data warehouse server) occurring multiple times while the ETL jobs were running.
    Error screen shot
    Please give some suggestions to make the FIM initial sync faster.

    Please try Resume-FIMReportingInitialSync instead of starting the initial sync again from the beginning.
    Sadly, the initial sync is very long; even in test environments it can take a long time.
    But it seems the data from FIM was successfully moved to the reporting database; it just hasn't been processed correctly by Service Manager between its own databases yet.
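
    A minimal sketch of that resume step, assuming the default FIM 2010 R2 Reporting install folder (the same folder that holds the other FIM Reporting sync scripts):

    cd 'C:\Program Files\Microsoft Forefront Identity Manager\2010\Reporting\PowerShell'
    .\Resume-FIMReportingInitialSync.ps1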

  • Exchange 2013 mailbox auditing command with showdetails parameter in ps1 script is not working via task scheduler

     
    Hi All,
    In my environment we have Exchange 2013 Enterprise Edition with SP1, installed on Windows Server 2012 Standard Edition.
    We have enabled mailbox auditing for a few mailboxes and created a simple PowerShell script containing only the command below. When I run the .ps1 script in the Exchange Management Shell, I get the expected output.
    Cmdlet in the PowerShell script:
    Search-MailboxAuditLog -StartDate ((Get-Date).AddHours(-24)) -EndDate (Get-Date) -ShowDetails | fl > e:\output.txt
    Note: the script contains only the above command, nothing else.
    I have also scheduled the same PowerShell script via Task Scheduler, but there I do not get valid output; instead I get a blank output file with no data in it.
    Steps I have taken to run the PowerShell script from Task Scheduler:
    1. When I remove the -ShowDetails parameter from the script, I do get output in the text file. But in my scenario -ShowDetails is the only parameter that gives me the in-depth details about mailbox auditing.
    The difference I have seen between Exchange 2010 and Exchange 2013:
    When I run the same PowerShell script via Task Scheduler in an Exchange 2010 Enterprise environment on Windows Server 2008 R2 Enterprise, I get the proper output without removing the -ShowDetails parameter.
    I am using the following to run the .ps1 file via Task Scheduler in the Exchange 2013 environment:
    Program/script: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
    Add arguments: -PSConsoleFile "E:\Program Files\Microsoft\Exchange Server\V15\Bin\exshell.psc1" -Command ". 'C:\scripts\MailboxAuditReport\test.ps1'"
    This is the error I get when I try to run the .ps1 script directly in Windows PowerShell rather than in the Exchange Management Shell:
    Error message: "the requesting account does not have permission to access the audit log"
    Please help me out to resolve this case.
    Thanks
    S.Nithyanandham
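
    Not from this thread, but a commonly suggested variation is to drop -PSConsoleFile and connect through explicit Exchange remoting, with the scheduled task running under an account that holds the Audit Logs RBAC role; a sketch (the server name is a placeholder):

    # Connect to Exchange via remoting and import only the cmdlet we need
    $session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri 'http://exch2013/PowerShell/' -Authentication Kerberos
    Import-PSSession $session -CommandName Search-MailboxAuditLog | Out-Null
    Search-MailboxAuditLog -StartDate (Get-Date).AddHours(-24) -EndDate (Get-Date) -ShowDetails |
        Format-List | Out-File 'E:\output.txt'
    Remove-PSSession $session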


  • Manually running a .ps1 in an administrator (elevated) shell

    I have a seemingly straightforward question: How can I manually run a .ps1 script on Server 2012 R2 and have it open in an administrator elevated shell?  I am right clicking and clicking "Run with Powershell" on the .ps1 file.
    My environment:
    Two Server 2012 R2 machines in the same domain in the same OU.  Both are full GUI installs. Both have UAC set to "default".
    The discrepancy:
    One of the servers will run any and all .ps1 files in an administrator elevated shell. The other server will run any and all .ps1 files in a non-administrator, standard shell. I have no idea what the differences are between the two servers. Neither is running any custom PowerShell profiles.
    The following registry keys are all identical between the two servers:
    HKEY_CLASSES_ROOT\Microsoft.PowerShellCmdletDefinitionXML.1
    HKEY_CLASSES_ROOT\Microsoft.PowerShellConsole.1
    HKEY_CLASSES_ROOT\Microsoft.PowerShellData.1
    HKEY_CLASSES_ROOT\Microsoft.PowerShellModule.1
    HKEY_CLASSES_ROOT\Microsoft.PowerShellScript.1
    HKEY_CLASSES_ROOT\Microsoft.PowerShellSessionConfiguration.1
    HKEY_CLASSES_ROOT\Microsoft.PowerShellXMLData.1
    What am I missing?

    Thanks for the fast response, Bill.
    1. Correct. Of course, both have UAC enabled and set identically at "default". FYI, disabling UAC in server 2008 and 2012 is not apples to apples: http://social.technet.microsoft.com/wiki/contents/articles/13953.windows-server-2012-deactivating-uac.aspx
    2. BINGO!! This was set to "enabled" on the server that I could not get to run the .ps1 files in an elevated prompt. It was set to "disabled" on the server that would run .ps1 files in an elevated shell. After I changed it to "disabled" on the "broken" server, it now runs .ps1 files in an elevated shell. (We do have tight change control, so I will use this method to run elevated PowerShell scripts on this specific server.)
    3. Both servers had already been rebooted.
    Additional information: I am logged in as the same domain administrator account on both machines.
    Thanks, Bill!
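
    For reference (not part of this thread), elevation can also be forced explicitly rather than relying on the file association; the script path here is a placeholder:

    Start-Process -FilePath powershell.exe -Verb RunAs -ArgumentList '-ExecutionPolicy Bypass -File "C:\Scripts\MyScript.ps1"'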

  • FIM Reporting ETLScript PowerShell Script for SCSM 2012?

    Hi,
    The FIM Reporting Deployment Guide is great; however, on a few occasions it forgets to mention where you are meant to execute things (http://technet.microsoft.com/en-us/library/jj133855(v=ws.10).aspx).
    For example, if it wasn't for the screenshot in the article, we would not have known that we need to run the ETLScript from the FIM Service/Portal server.
    Everything until the ETLScript has thus far worked; and we have deployed the Service Manager 2012 console on the FIM Service/Portal server (since we are using SCSM 2012 for FIM Reporting).
    However, it appears that the ETLScript (in the deployment guide) has been written for SCSM 2010.
    So, has Microsoft or anyone published an updated SCSM 2012 ETLScript script?
    Thanks,
    SK

    Could this be it?
    http://gallery.technet.microsoft.com/PowerShell-Script-to-Run-a4a2081c
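
    If that gallery script doesn't fit, a hedged sketch of driving the SCSM 2012 ETL jobs directly with the data warehouse cmdlets; the module path, server name, and job name below are illustrative and vary by installation:

    Import-Module 'C:\Program Files\Microsoft System Center 2012\Service Manager\Microsoft.EnterpriseManagement.Warehouse.Cmdlets.psd1'
    Get-SCDWJob -ComputerName DWSERVER                            # list the Extract/Transform/Load jobs
    Start-SCDWJob -ComputerName DWSERVER -JobName 'Extract_DW_MG' # then Transform.Common and Load.Common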

  • Unable to run FIMPostInstallScriptsForDataWarehouse for FIM Reporting

    Hi Everyone,
    I am configuring FIM Reporting. I initially installed SCSM 2012 R2, which is not supported, so I uninstalled it and installed SCSM 2012 SP1 instead. After installing the SCSM 2012 SP1 management server and data warehouse, running the "FIMPostInstallScriptsForDataWarehouse" script fails with the error message in the first screen shot, and when I try to install the snap-ins it tells me to delete the existing files shown in the second screen shot.
    My question is: how and where do I go to delete those existing snap-ins so that the script can run? Please find the screen shots below.
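
    Since the screen shots did not come through, this is only a guess: the conflict is usually a stale machine-level snap-in registration. A sketch for inspecting what is registered (the registry path is where snap-in registrations live):

    Get-PSSnapin -Registered      # list snap-ins registered on the machine
    Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\PowerShell\1\PowerShellSnapIns'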


  • Power Shell Script failed to run - GetMGAlertsCount.ps1

    Our SCOM 2012 R2 is getting the following script error every 45 minutes...
    The PowerShell script failed with below exception
    System.Management.Automation.PropertyNotFoundException: The property 'Name' cannot be found on this object. Verify that the property exists.At line:57 char:3
    + $firstLvlClass = Get-SCOMClass -Id $firstLvlMember.Name
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    at CallSite.Target(Closure , CallSite , Object )
    at System.Dynamic.UpdateDelegates.UpdateAndExecute1[T0,TRet](CallSite site, T0 arg0)
    at CallSite.Target(Closure , CallSite , Object )
    at System.Management.Automation.Interpreter.DynamicInstruction`2.Run(InterpretedFrame frame)
    at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)
    at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)
    Script Name: GetMGAlertsCount.ps1
    One or more workflows were affected by this.
    Workflow name: ManagementGroupCollectionAlertsCountRule
    Instance name: All Management Servers Resource Pool
    Instance ID: {4932D8F0-C8E2-2F4B-288E-3ED98A340B9F}
    Management group: NCA2
    On the management server I see corresponding 22406 errors:
    Provider: Health Service Modules
    Event ID: 22406 (Qualifiers: 49152)
    Level: 2 (Error)
    Task: 0, Keywords: 0x80000000000000
    Time created: 2015-04-21T18:44:03Z
    Event record ID: 1303964
    Channel: Operations Manager
    Computer: PKSWSM001.ad.nca.com
    Event data: NCA2, ManagementGroupCollectionAlertsCountRule, All Management Servers Resource Pool, {4932D8F0-C8E2-2F4B-288E-3ED98A340B9F}, GetMGAlertsCount.ps1, 300
    System.Management.Automation.PropertyNotFoundException: The property 'Name' cannot be found on this object. Verify that the property exists. At line:57 char:3 + $firstLvlClass = Get-SCOMClass -Id $firstLvlMember.Name
    at CallSite.Target(Closure , CallSite , Object ) at System.Dynamic.UpdateDelegates.UpdateAndExecute1[T0,TRet](CallSite site, T0 arg0) at CallSite.Target(Closure , CallSite , Object ) at System.Management.Automation.Interpreter.DynamicInstruction`2.Run(InterpretedFrame frame) at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame) at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)
    I'm at a bit of a loss to troubleshoot this error.  I cannot find the "GetMGAlertsCount.ps1"  script to execute manually and don't have much else I can find to troubleshoot.
    Thanks for your help!

    1) Please check the event log on the management server and see if there are any related errors.
    Nothing interesting found.
    2) On the management server, please ensure the following key exists:
    HKLM:\SOFTWARE\Microsoft\System Center Operations Manager\12\Setup\Powershell\V2
    Two entries found underneath this key:
    (Default) (value not set)
    InstallDirectory C:\Program Files\Microsoft System Center 2012 R2\Operations Manager\Powershell\
    3) Make sure the Action Account has access to the resources used by the PowerShell script.
    It is LocalSystem.
    4) Make sure that the computer is not over-utilized.
    No indication of over-utilization.
    Thank you for the suggestions! Do you know of anything else to check?
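
    One further thought, purely illustrative: the exception says $firstLvlMember has no Name property at line 57, so if you ever locate the script, a defensive guard of this shape would avoid the hard failure (this is not the actual Microsoft code):

    # Hypothetical guard around the failing call in GetMGAlertsCount.ps1
    if ($firstLvlMember -and $firstLvlMember.PSObject.Properties['Name']) {
        $firstLvlClass = Get-SCOMClass -Id $firstLvlMember.Name
    }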

  • Interactive report run pl/sql by clicking on link column

    Hi!
    How can I run a PL/SQL script by clicking on a link column of an interactive report?
    Andras

    EDIT: Oh sorry, I didn't see the date, I'm a little late...
    Hi,
    if you want a link for each row, another solution is to include the link column in your query, which is more customizable, i.e.:
    select
    COLUMN1,
    COLUMN2,
    '<a href="f?p=&APP_ID.:6:&APP_SESSION.::NO::" onclick="your_function('||YOUR_ID_COLUMN||');return false;">link or picture</a>' as "link",
    COLUMN3
    from
    ...
    As you can see, you will have to include the ID (primary key) column in your report (hidden) to identify the clicked row when calling your function.
    If you want to use the Link Column property in the report attributes, you will have to call your js function in the "URL" field like this:
    javascript:your_function('#YOUR_ID_COLUMN#');
    Yann.
    Edited by: Yann39 on 17 sept. 2010 02:41

  • Problem-Report generation using shell script

    Hi
    We have a Production database and a Reporting database (a copy of the Production database), both on Oracle 9.2.0.5 and Solaris 5.8. A package inside the Oracle database extracts some data and generates a report. A shell script on Solaris passes in the parameters and calls the package to generate the report. The parameters it passes are the name of the report to be generated and the location where it is to be generated, both hard-coded into the script. The script is scheduled to run through crontab.
    The problem we are facing is that the script successfully generates the report when run against the Reporting database, but when we use the same script against the Production database it gives the error "Invalid directory path". I have tried various other directory paths, even '/tmp' and '/', but it still gives the same error when executed against the Production database.
    Could somebody provide any ideas on what might be going wrong?
    The reasons it must be executed on the Production database rather than the Reporting database are unavoidable. It runs in off-business hours anyway and takes about 10 seconds to execute.
    Please let me know if there is any other info that I missed providing here.
    Thanks in advance...

    I can only guess, because you didn't provide the contents of the script and package.
    The "Invalid directory path" error, as you describe it, could be ORA-29280 due to a non-existent directory object.
    Try executing (as SYS or SYSTEM) select * from dba_directories; (or select * from all_directories; as the user the script logs in as) on both databases and compare the results. If the directory you need is missing, create it with CREATE DIRECTORY <dirname> AS '<path>'; from SQL*Plus (and don't forget to grant rights to the user).
    The error could also come from the shell script itself; in that case you will have to find the resolution yourself, because you didn't provide the script source.

  • Calling a report from unix shell script

    Hi,
    I need to call a report from a Unix shell script.
    May I know the procedure to accomplish this?
    Thanks in Advance
    A.Gopal

    First, you should not include the whole path to your report in the call.
    Use it like this:
    /ora/u01/oracle/v101/as2/bin/rwrun.sh report=an_stati destype=file desname=/ora/u01/oracle/v101/as2/test.pdf desformat=pdf
    In $ORACLE_HOME/bin/reports.sh:
    1) Verify that you have updated the REPORTS_PATH variable to include your folder where you have the report in question
    REPORTS_PATH=/ora/u20/app/qits/env1/run:$ORACLE_HOME/reports/templates:$ORACLE_HOME/reports/samples/demo: ....
    2) Verify that the REPORTS_TMP variable is pointing to a valid location and that the oracle user has access to write on it.
    After that, post the content of the tracefile located at $ORACLE_HOME/reports/logs/{in-process report server name folder}/rwserver.trc
    If no file is present, it means that you need to enable tracing in your reports server's conf file: go to the $ORACLE_HOME/reports/conf folder and locate the .conf file that corresponds to your in-process reports server name (as specified in the rwservlet.properties file), then open/edit the file to enable trace logs,
    i.e.
    Change the following line:
    <!--trace traceOpts="trace_all"/-->
    to <trace traceOpts="trace_all"/>
    Bounce the reports server and try to run the report again, this time the .trc file should be generated, post the content so that we can take a look.

  • How can I see Log information of Report Run Time?

    Hi Gurus,
    How can I see log information of report run time? Until now I have been timing report runs manually. Is there any way I can see the workbook running time in the log information?
    Thanks & Regards
    Vikram

    There could be a few things -
    At one time, you needed to run a separate script to create the tables. I'm not sure that is still the case. If you check the Administrators guide, look into the chapter that deals with the EUL Status Workbooks.
    If you are not logged on as the eul owner, you may not have select privileges, or you may need to qualify the table with the schema (if there is no synonym) - select * from <eul_owner>.EUL5_QPP_STATS;
    If you are on 4i, the table is EUL4_QPP_STATS

  • Capturing log files from multiple .ps1 scripts called from within a .bat file

    I am trying to invoke multiple instances of a PowerShell script and capture individual log files from each of them. I can start the multiple instances by calling 'start powershell' several times, but am unable to capture logging. If I use 'call powershell' I can capture the log files, but the batch file won't continue until the current 'call powershell' has completed.
    i.e., within Test.bat:
    start powershell . \Automation.ps1 %1 %2 %3 %4 %5 %6 > a.log 2>&1
    timeout /t 60
    start powershell . \Automation.ps1 %1 %2 %3 %4 %5 %6 > b.log 2>&1
    timeout /t 60
    start powershell . \Automation.ps1 %1 %2 %3 %4 %5 %6 > c.log 2>&1
    timeout /t 60
    start powershell . \Automation.ps1 %1 %2 %3 %4 %5 %6 > d.log 2>&1
    timeout /t 60
    start powershell . \Automation.ps1 %1 %2 %3 %4 %5 %6 > e.log 2>&1
    timeout /t 60
    start powershell . \Automation.ps1 %1 %2 %3 %4 %5 %6 > f.log 2>&1
    the log files get created but are empty.  If I invoke 'call' instead of start I get the log data, but I need them to run in parallel, not sequentially.
    call powershell . \Automation.ps1 %1 %2 %3 %4 %5 %6 > a.log 2>&1
    timeout /t 60
    call powershell . \Automation.ps1 %1 %2 %3 %4 %5 %6 > b.log 2>&1
    timeout /t 60
    call powershell . \Automation.ps1 %1 %2 %3 %4 %5 %6 > c.log 2>&1
    timeout /t 60
    call powershell . \Automation.ps1 %1 %2 %3 %4 %5 %6 > d.log 2>&1
    timeout /t 60
    call powershell . \Automation.ps1 %1 %2 %3 %4 %5 %6 > e.log 2>&1
    Any suggestions of how to get this to work?

    Batch files are sequential by design (batch up a bunch of statements and execute them). Call doesn't run in a different process, so when you use it the batch file waits for it to exit. From CALL:
    Calls one batch program from another without stopping the parent batch program
    I was hoping for the documentation to say the batch file waits for CALL to return, but this is as close as it gets.
    Start(.exe) "starts a separate window to run a specified program or command". The reason it runs in parallel is that once it starts the target application, start.exe exits and the batch file continues. It has no idea about the powershell.exe process that you kicked off. Because of this, you can't pipe the output.
    Update: I was wrong, you can totally redirect the output of what you run with start.exe.
    How about instead of running a batch file you run a PowerShell script? You can run script blocks or call individual scripts in parallel with the
    Start-Job cmdlet.
    You can monitor the jobs and when they complete, pipe them to
    Receive-Job to see their output. 
    For example:
    $sb = {
        Write-Output "Hello"
        Sleep -Seconds 10
        Write-Output "Goodbye"
    }
    Start-Job -Scriptblock $sb
    Start-Job -Scriptblock $sb
    Here's a script that runs the scriptblock $sb. The script block outputs the text "Hello", waits for 10 seconds, and then outputs the text "Goodbye"
    Then it starts two jobs (in this case I'm running the same script block)
    When you run this you receive this for output:
    PS> $sb = {
    >> Write-Output "Hello"
    >> Sleep -Seconds 10
    >> Write-Output "Goodbye"
    >> }
    >>
    PS> Start-Job -Scriptblock $sb
    Id Name State HasMoreData Location Command
    1 Job1 Running True localhost ...
    PS> Start-Job -Scriptblock $sb
    Id Name State HasMoreData Location Command
    3 Job3 Running True localhost ...
    PS>
    When you run Start-Job it will execute your script or scriptblock in a new process and continue to the next line in the script.
    You can see the jobs with
    Get-Job:
    PS> Get-Job
    Id Name State HasMoreData Location Command
    1 Job1 Running True localhost ...
    3 Job3 Running True localhost ...
    OK, that's great. But we need to know when the job's done. The job's State property will tell us this (we're looking for a state of "Completed"), so we can build a loop and check:
    $Completed = $false
    while (!$Completed) {
        # get all the jobs that haven't yet completed
        $jobs = Get-Job | where {$_.State.ToString() -ne "Completed"}
        # if Get-Job doesn't return any jobs (i.e. they are all completed)
        if ($jobs -eq $null) {
            $Completed = $true
        }
        # otherwise update the screen
        else {
            Write-Output "Waiting for $($jobs.Count) jobs"
            sleep -s 1
        }
    }
    This will output something like this:
    Waiting for 2 jobs
    Waiting for 2 jobs
    Waiting for 2 jobs
    Waiting for 2 jobs
    Waiting for 2 jobs
    Waiting for 2 jobs
    Waiting for 2 jobs
    Waiting for 2 jobs
    Waiting for 2 jobs
    Waiting for 2 jobs
    When it's done, we can see the jobs have completed:
    PS> Get-Job
    Id Name State HasMoreData Location Command
    1 Job1 Completed True localhost ...
    3 Job3 Completed True localhost ...
    PS>
    Now at this point we could pipe the jobs to Receive-Job:
    PS> Get-Job | Receive-Job
    Hello
    Goodbye
    Hello
    Goodbye
    PS>
    But as you can see it's not obvious which script is which. In your real scripts you could include some identifiers to distinguish them.
    Another way would be to grab the output of each job one at a time:
    foreach ($job in $jobs) {
        $job | Receive-Job
    }
    If you store the output in a variable or save it to a log file with Out-File, the trick is matching up the jobs to the output. Something like this may work:
    $a_sb = {
        Write-Output "Hello A"
        Sleep -Seconds 10
        Write-Output "Goodbye A"
    }
    $b_sb = {
        Write-Output "Hello B"
        Sleep -Seconds 5
        Write-Output "Goodbye B"
    }
    $job = Start-Job -Scriptblock $a_sb
    $a_log = $job.Name
    $job = Start-Job -Scriptblock $b_sb
    $b_log = $job.Name
    $Completed = $false
    while (!$Completed) {
        $jobs = Get-Job | where {$_.State.ToString() -ne "Completed"}
        if ($jobs -eq $null) {
            $Completed = $true
        }
        else {
            Write-Output "Waiting for $($jobs.Count) jobs"
            sleep -s 1
        }
    }
    Get-Job | where {$_.Name -eq $a_log} | Receive-Job | Out-File .\a.log
    Get-Job | where {$_.Name -eq $b_log} | Receive-Job | Out-File .\b.log
    If you check the folder you'll see the log files, and they contain the scripts' output:
    PS> dir *.log
    Directory: C:\Users\jwarren
    Mode LastWriteTime Length Name
    -a--- 1/15/2014 7:53 PM 42 a.log
    -a--- 1/15/2014 7:53 PM 42 b.log
    PS> Get-Content .\a.log
    Hello A
    Goodbye A
    PS> Get-Content .\b.log
    Hello B
    Goodbye B
    PS>
    The trouble though is you won't get a log file until the job has completed. If you use your log files to monitor progress this may not be suitable.
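
    As an aside (not in the original reply), the polling loop can be replaced with Wait-Job, which simply blocks until the given jobs finish:

    $jobs = @((Start-Job -Scriptblock $a_sb), (Start-Job -Scriptblock $b_sb))
    Wait-Job -Job $jobs | Out-Null          # block until both jobs complete
    Receive-Job -Job $jobs[0] | Out-File .\a.log
    Receive-Job -Job $jobs[1] | Out-File .\b.log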
    Jason Warren
    @jaspnwarren
    jasonwarren.ca
    habaneroconsulting.com/Insights

  • Financial Reporting batch reports run slow at random

    FR reports version: 11.1.2.1
    This is a summary of our environment:
    -Linux server where Hyperion batch reports run
    -Linux (SLES-11) server where Essbase resides
    -Windows 2008 server where Hyperion Print Service and GhostScript reside
    This is what we observed this morning:
    Batch reports are executed serially by a master shell script.
    One of these reports takes 30+ minutes to complete; the exact same report usually completes in less than 3 minutes.
    This problem occurs 3 to 5 times each morning, and not necessarily on the same report.
    While the report shows 'Running' in the Hyperion Batch Scheduler, there is no process present on the reports server, no active session on Essbase, and no active print process on the print server.
    Eventually, the report completes successfully and the PDF file is present on the reports server.
    We haven't been able to determine where the process is during the period it appears to be stuck; the best we can do is look at the status in the Batch Scheduler. It is not a practical option for us to schedule everything using the Batch Scheduler on a daily basis (this is what Oracle is suggesting, by the way).
    Has anybody experienced similar FR batch reports slowness?
    Thanks.


  • FIM Reporting and SCSM Database Query Issue

    Hello,
    We have been having issues with FIM Reporting: the ETL process seems to be failing. Drilling down further, we found the following SQL query running on the SCSM database server for a very long time:
    "CREATE PROCEDURE dbo.[p_GroomManagedEntity]  (      @TargetId uniqueidentifier,      @RetentionPeriodInMinutes int,      @GroomingCriteria nvarchar(max),      @BatchSize int  )
     AS  BEGIN      DECLARE @LastErr int;      DECLARE @RowCount int = 1;      DECLARE @TotalRowCount int = 0;      DECLARE @RetentionDateTime DATETIME;      DECLARE @SelectEntitiesToBeGroomedStmt
    nvarchar(max);      DECLARE @CoreDeleteTypedEntitiesTable TypedManagedEntityType;      DECLARE @TimeGenerated DATETIME = getutcdate();      DECLARE @Command nvarchar(MAX)      DECLARE @GroomHistoryId
    bigint      DECLARE @Comment nvarchar(max);          SET @Command = N'Exec dbo.p_GroomManagedEntity ' + CAST(@TargetId AS nvarchar(40)) + ', ' + CAST(@RetentionPeriodInMinutes  AS nvarchar(10)) +
    ', ' + CAST(@GroomingCriteria  AS nvarchar(100)) + ', ' + CAST(@BatchSize AS nvarchar(10))         -- Call the grooming history insert sproc       EXEC @LastErr = dbo.p_InternalJobHistoryInsert @Command,
    @GroomHistoryId OUT      IF @LastErr <> 0          GOTO Err;        CREATE TABLE #BaseManagedEntitiesToDelete      (          BaseManagedEntityId uniqueidentifier
         );          -- Figure out the retention datetime      SELECT @RetentionDateTime = DATEADD(mi, -@RetentionPeriodInMinutes, getutcdate())        -- Execute the grooming filter statement,
    hence populate the table variable, with "BatchSize" many entities.      WHILE (@RowCount > 0)      BEGIN          INSERT #BaseManagedEntitiesToDelete EXEC sp_executesql @GroomingCriteria,
    N'@Retention DATETIME,@TargetTypeId uniqueidentifier,@NumOfEntities INT',                   @Retention = @RetentionDateTime, @TargetTypeId = @TargetId, @NumOfEntities = @BatchSize;          
     SELECT @LastErr = @@ERROR, @RowCount = @@ROWCOUNT;          IF @LastErr <> 0              GOTO Err;                    IF (@RowCount >
    0)          BEGIN              -- Convert the BMEIds to TMEIds.              INSERT @CoreDeleteTypedEntitiesTable              SELECT
    TME.TypedManagedEntityId              FROM #BaseManagedEntitiesToDelete D              JOIN dbo.TypedManagedEntity TME                  ON D.BaseManagedEntityId
    = TME.BaseManagedEntityId              WHERE TME.IsDeleted = 0;                                       SELECT @LastErr = @@ERROR;
                 IF @LastErr <> 0                  GOTO Err;                                --
    Use existing DDP code to delete the instances captured in the temp table.                  EXEC @LastErr = dbo.p_DDPWrapperForGroomManagedEntity @TimeGenerated, @CoreDeleteTypedEntitiesTable;      
           IF @LastErr <> 0                  GOTO Err;                                TRUNCATE TABLE #BaseManagedEntitiesToDelete;
                 SELECT @LastErr = @@ERROR;              IF @LastErr <> 0                  GOTO Err;          END
                       SET @TotalRowCount = @TotalRowCount + @RowCount;      END            -- Call the grooming history insert sproc to update status to success
         SET @Comment = N'BaseManagedEntity: ' + CAST(@TotalRowCount AS nvarchar(10))      EXEC @LastErr = dbo.p_InternalJobHistoryUpdate @GroomHistoryId, 1, @Comment      IF @LastErr <> 0      
       GOTO Err;        RETURN 0        Err:        -- Call the grooming history insert sproc to update status to failure.      SET @Comment = N'BaseManagedEntity: ' + CAST(@TotalRowCount
    AS nvarchar(10))      EXEC @LastErr = dbo.p_InternalJobHistoryUpdate @GroomHistoryId, 2, @Comment      IF @LastErr <> 0          GOTO Err;        RETURN 1  END"
    Can somebody advise what this query is really about and what its function is? We are thinking of killing it since it has been running for a very long time; will that hamper anything or corrupt the database?
    Rgds,
    Abhishek.

