Optimizer job failure

Hi,
We have encountered the same failure in the Optimizer background job two days in a row. The Optimizer fails with the error "An Exception occurred in communication object.".
Please let me know whether running a consistency check on the APO database would help resolve this issue. Thank you.
Rishikesh

Hi Rishikesh,
You could check whether your optimizer server is properly reachable from SCM.
The issue most likely lies in the Basis domain.
There is an SAP document on Optimizer setup, but unfortunately I can't attach it here.
I am pasting some things that could be checked by the Basis team (choose the Optimizer below that you use):
1. Log on to the SAP SCM System.
2. Call transaction SM59.
The Display and maintain RFC destinations screen appears.
3. Open the node for TCP/IP connection.
4. For the first optimizer server, you have to adapt the following RFC entries:
- OPTSERVER_CTM01
- OPTSERVER_DPS01
- OPTSERVER_SNP01
- OPTSERVER_SEQ01
- OPTSERVER_VSR01
- OPTSERVER_MMP01
- OPTSERVER_CS01
For the second optimizer server, the RFC entry names end with 02 (for example,
OPTSERVER_CTM02) and so on.
To adapt an RFC entry:
a. Double-click the destination name OPTSERVER_<Optimizer>01.
The RFC Destination OPTSERVER_<Optimizer>01 screen appears.
b. Depending on the server you must do the following to check the RFC entries:
A) Standalone Optimizer Server:
i. Choose Start on Explicit host.
ii. In the Program field check your program path (see table Program Paths of RFC Entries below).
iii. Check the name of your Target Host.
iv. Enter the name of the gateway host and the corresponding gateway service SAPGW<GW_NO>. You can find out the required parameters on your target host as follows:
a. On your target host, call transaction SMGW
b. Choose Goto → Parameters → Display (see the entries for gateway hostname and gateway service)
v. Confirm with O.K.
B) Optimizer and SAP SCM on Same Server:
i. Choose Start on Application server.
ii. In the Program entry field check your program path (see table Program Paths of RFC Entries below).
If your SAP SCM server is a Unicode system, you must do the following in addition to the above settings for each OPTSERVER_<Optimizer>01 destination:
a. Choose the MDMP & Unicode tab.
b. In the group frame Communication Type With Target System, select the Non-Unicode flag and the Inactive flag in the MDMP Settings box.
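Before handing the SM59 steps above to Basis, it may help to confirm at the network level that the optimizer host's gateway port is reachable from the SCM side at all. A minimal sketch in Python; the host name and instance number are placeholders, and the 3300 + NN port mapping is only the usual default for the sapgwNN service (custom services-file entries would differ):

```python
import socket

def gateway_port(instance_no: str) -> int:
    """SAP gateway service sapgwNN conventionally listens on TCP port 3300 + NN."""
    return 3300 + int(instance_no)

def is_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Attempt a plain TCP connect; True means something is listening on the port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical optimizer host and instance number -- substitute your own values.
# print(is_reachable("optserver01.example.com", gateway_port("00")))
```

If this already fails, the problem is network or gateway configuration rather than the RFC destination settings themselves.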
I hope this helps with the basic checks.
If your Basis team is not able to resolve the issue, it is better to reach out to SAP through OSS.
Thanks - Pawan

Similar Messages

  • HP ePrint Home&Biz app not working. Print job failure:busy message appears on tablet

    I have an HP Laserjet P1102W and am trying to print from a Viewsonic tablet (android OS) using the HP ePrint Home&Biz app.  I can send an email to the printer via the tablet and it prints the document with no problems.
    I apologize in advance for the long message... 
    If I hit the PRINT key on the lower bar (which says 'HP Laserjet Professional P1102W (NPI7...host name)'), all that happens is that the blue and green LEDs come on, flash three times in sequence, and then stay on steady. Just before the blue and green come on steady at the end, the "print job failure: busy" message flashes on my tablet.
    I assume the sequence of LED flashes is the HP printer receiving the print command three times and failing to perform the task because the printer thinks it is busy.
    After searching for this problem on the net, it would appear I am not the only person with this issue. And yes, I have tried powering down my modem, printer, PC, and tablet and restarting all of the above. Not that the PC has anything to do with it, but I am desperate.
    Does anyone have a viable working solution to this problem???

    Hello,
    Thanks for the post. With this one, there is a firmware update available for the printer, and I've included links below with some excellent steps to check regarding this issue. Good luck!
    http://h10025.www1.hp.com/ewfrf/wc/softwareCategory?os=219&lc=en&cc=us&dlc=en&sw_lang=&product=41103...
    http://h10025.www1.hp.com/ewfrf/wc/document?docname=c02933944&cc=us&dlc=en&lc=en&product=4110396&tmp...
    I worked for HP but my posts and replies are my own....Thank you!

  • How to find out batch job failure and taking action:

    Normally we monitor batch jobs through transaction code SM37. In SM37 we give a batch job name, date, and time as input. As a first step we check the batch job log for the reason for the failure, or check the job's spool request; both help in analyzing the error.
    From my experience, a batch job may fail for the reasons below.
    1. Data issues: e.g. an invalid character in the quantity (MEINS) field >>>> We correct the corresponding document with the correct value, or we rerun (or ask the team to rerun) the batch job after excluding the problematic documents from the job variant, so that it can process the other documents.
    2. Configuration issues: e.g. material XXXX is not extended for the plant >>>> We contact the material master team or the business to correct the data, or we raise a subcontract call with the support team to correct the data. Once the data has been corrected, we request the team to rerun the batch job.
    3. Performance issues: the volume of data processed by the batch job, or network problems >>>> We normally encounter these during month-end processing, when a large number of accounting transactions or documents are posted by the business. The job can fail because there is not enough memory to complete the program, or because SELECT queries time out due to the volume of records.
    4. Network issues: temporary connectivity problems with partner systems. An outage in a partner system such as APO or GTS causes the batch job to fail, because the job cannot connect to the other system to get the information it needs for further steps. Normally we check the RFC destination status by running a custom program to see whether connectivity between the systems is working, then inform the partner-system team. Once the partner system comes back online, we ask the team to restart or manually resubmit the batch job.
    Sometimes we create a manual job with transaction code SM36.

    I'm not sure what the question is among all that, but if you want to check on jobs that are viewable via SM37 and started via SM36, the relevant tables are TBTCP (Background Job Step Overview) and TBTCO (Job Status Overview Table).
    You can use the following FM to get job details:
    GET_JOB_RUNTIME_INFO - Reading Background Job Runtime Data
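If you want to read those job status tables programmatically from outside the SAP system, one option is the generic RFC_READ_TABLE function module via the pyrfc library. A hedged sketch; the connection parameters, job name, and chosen fields are illustrative assumptions, not values from this thread:

```python
# Sketch: read job status rows from TBTCO over RFC using the pyrfc library.
# Connection details and the job name below are placeholders (assumptions).

def parse_wa_rows(data, delimiter="|"):
    """Split the WA strings returned by RFC_READ_TABLE into lists of fields."""
    return [[f.strip() for f in row["WA"].split(delimiter)] for row in data]

def fetch_job_status(conn, jobname):
    """Query TBTCO for one job name and return [jobname, status, date, time] rows."""
    result = conn.call(
        "RFC_READ_TABLE",
        QUERY_TABLE="TBTCO",
        DELIMITER="|",
        OPTIONS=[{"TEXT": f"JOBNAME EQ '{jobname}'"}],
        FIELDS=[{"FIELDNAME": "JOBNAME"}, {"FIELDNAME": "STATUS"},
                {"FIELDNAME": "SDLDATE"}, {"FIELDNAME": "SDLTIME"}],
    )
    return parse_wa_rows(result["DATA"])

# from pyrfc import Connection
# conn = Connection(ashost="scmhost", sysnr="00", client="100",
#                   user="MONITOR", passwd="secret")
# print(fetch_job_status(conn, "ZMY_NIGHTLY_JOB"))
```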

  • W2K12-R deduplication optimization job stuck at 100% for days

    Hello,
    I have a W2K12-R server (VM, 4 vCPU, 14 GB vRAM) with a 7 TB D: drive; deduplication is enabled for this drive. I use the drive for backing up virtual machines, and at the moment 3.6 TB are used. Each night ~2 TB of new data is backed up (only the C: drives of VMs),
    so there is a very high dedup rate; these new 2 TB are usually deduplicated during the daytime.
    Capacity                 : 6.64 TB
    FreeSpace               : 3.01 TB
    UsedSpace              : 3.63 TB
    UnoptimizedSize        : 110.74 TB
    SavedSpace             : 107.1 TB
    SavingsRate             : 96 %
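As a quick sanity check, the figures above are internally consistent, which suggests the dedup accounting itself is intact: UnoptimizedSize is roughly UsedSpace plus SavedSpace, and SavingsRate is SavedSpace divided by UnoptimizedSize. Plain arithmetic on the quoted numbers:

```python
used_tb = 3.63           # UsedSpace
saved_tb = 107.1         # SavedSpace
unoptimized_tb = 110.74  # UnoptimizedSize

# UnoptimizedSize should equal UsedSpace + SavedSpace (within display rounding).
assert abs((used_tb + saved_tb) - unoptimized_tb) < 0.02

# SavingsRate is SavedSpace / UnoptimizedSize, shown as a whole percent.
savings_rate = int(saved_tb / unoptimized_tb * 100)
print(savings_rate)  # 96, matching the reported SavingsRate
```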
    For a few weeks I have had the problem that ThroughputOptimization jobs (all Optimization jobs) do not finish. The job is able to free space, but it stays at 100% for days.
    Although I have set DurationHours to 6 or 9 hours, the scheduled job does not end, so no new Optimization job starts. That leads to a completely filled D: drive, because the space that gets filled by backup jobs during the night is not freed during
    the planned time frame.
    Event log from the start of the job:
    Provider: Microsoft-Windows-Deduplication, Guid {F9FE3908-44B8-48D9-9A32-5A763FF5ED79}
    EventID: 6148 (Version 0, Level 4, Task 0, Opcode 0, Keywords 0x8000000000000000)
    TimeCreated: 2014-07-21T14:20:04.531120000Z
    EventRecordID: 2768
    Execution: ProcessID 4672, ThreadID 268
    Channel: Microsoft-Windows-Deduplication/Operational
    Computer: xxxxxx.net
    Security: UserID S-1-5-18
    EventData:
      JobType: 1
      JobInstanceId: {39D260C4-32D0-4511-8982-598DA53DB423}
      VolumeGuidPath: \\?\Volume{27fe6999-df3b-4864-81fe-853126e2c9cc}\
      VolumeDisplayName: D:
      AvailableMemoryMb: 11058
      JobPriorityType: 3
    I don't see further information about job 39D260C4-32D0-4511-8982-598DA53DB423 in the event log. In Resource Monitor I see that fsdmhost has very little disk activity (600 bytes/s) but a CPU load of 25% (i.e. one of the four vCPUs is fully busy).
    I stopped the backups this week and started a manual full Optimization job, which has now stayed at 100% for more than a day.
    Any ideas what to check? Any patch I can apply?
    Some details:
    23.07.2014 - 8:40
    Get-DedupStatus
    FreeSpace    SavedSpace   OptimizedFiles     InPolicyFiles      Volume
    3.01 TB      107.1 TB     12232              12232              D:
    Get-DedupStatus | fl
    Volume                             : D:
    VolumeId                           : \\?\Volume{27fe6999-df3b-4864-81fe-853126e2c9cc}\
    Capacity                           : 6.64 TB
    FreeSpace                          : 3.01 TB
    UsedSpace                          : 3.63 TB
    UnoptimizedSize                    : 110.74 TB
    SavedSpace                         : 107.1 TB
    SavingsRate                        : 96 %
    OptimizedFilesCount                : 12232
    OptimizedFilesSize                 : 110.58 TB
    OptimizedFilesSavingsRate          : 96 %
    InPolicyFilesCount                 : 12232
    InPolicyFilesSize                  : 110.58 TB
    LastOptimizationTime               : 23.07.2014 04:44:59
    LastOptimizationResult             : 0x8056533D
    LastOptimizationResultMessage      : The operation was cancelled.
    LastGarbageCollectionTime          : 20.07.2014 05:26:12
    LastGarbageCollectionResult        : 0x00000000
    LastGarbageCollectionResultMessage : The operation completed successfully.
    LastScrubbingTime                  : 20.07.2014 16:06:29
    LastScrubbingResult                : 0x00000000
    LastScrubbingResultMessage         : The operation completed successfully.
    Get-DedupVolume
    Enabled            UsageType          SavedSpace           SavingsRate    Volume
    True               Default            107.1 TB             96 %           D:
    Get-DedupVolume |fl
    Volume                   : D:
    VolumeId                 : \\?\Volume{27fe6999-df3b-4864-81fe-853126e2c9cc}\
    Enabled                  : True
    UsageType                : Default
    DataAccessEnabled        : True
    Capacity                 : 6.64 TB
    FreeSpace                : 3.01 TB
    UsedSpace                : 3.63 TB
    UnoptimizedSize          : 110.74 TB
    SavedSpace               : 107.1 TB
    SavingsRate              : 96 %
    MinimumFileAgeDays       : 0
    MinimumFileSize          : 32768
    NoCompress               : False
    ExcludeFolder            :
    ExcludeFileType          :
    ExcludeFileTypeDefault   : {edb, jrs}
    NoCompressionFileType    : {asf, mov, wma, wmv...}
    ChunkRedundancyThreshold : 100
    Verify                   : False
    OptimizeInUseFiles       : False
    OptimizePartialFiles     : False
    Get-DedupMetadata | fl
    Volume                         : D:
    VolumeId                       : \\?\Volume{27fe6999-df3b-4864-81fe-853126e2c9cc}\
    StoreId                        : {1E87DD60-0265-4726-9EDD-81FCE4E9D70A}
    DataChunkCount                 : 151954987
    DataContainerCount             : 3983
    DataChunkAverageSize           : 24.35 KB
    DataChunkMedianSize            : 0 B
    DataStoreUncompactedFreespace  : 0 B
    StreamMapChunkCount            : 95398
    StreamMapContainerCount        : 1944
    StreamMapAverageDataChunkCount :
    StreamMapMedianDataChunkCount  :
    StreamMapMaxDataChunkCount     :
    HotspotChunkCount              : 1048520
    HotspotContainerCount          : 54
    HotspotMedianReferenceCount    :
    CorruptionLogEntryCount        : 0
    TotalChunkStoreSize            : 3.6 TB
    Get-DedupSchedule
    Enabled    Type               StartTime    Days                 Name
    True       Optimization                                         BackgroundOptimization
    False      GarbageCollection  07:30        {Monday, Tuesda...   DailyGarbageCollection
    True       Optimization       09:00        {Sunday, Monday...   ThroughputOptimization
    False      Optimization       14:30        {Sunday, Monday...   ThroughputOptimization-2
    True       GarbageCollection  16:00        Saturday             WeeklyGarbageCollection
    True       Scrubbing          03:45        Sunday               WeeklyScrubbing
    Get-DedupSchedule | fl
    Name                     : BackgroundOptimization
    Enabled                  : True
    Type                     : Optimization
    Days                     :
    Start                    :
    DurationHours            :
    StopWhenSystemBusy       : True
    Memory                   : 25 %
    Priority                 : Low
    InputOutputThrottleLevel : Low
    ScheduledTask            : \Microsoft\Windows\Deduplication\BackgroundOptimization
    Full                     :
    ReadOnly                 :
    Name                     : DailyGarbageCollection
    Enabled                  : False
    Type                     : GarbageCollection
    Days                     : {Monday, Tuesday, Wednesday, Thursday...}
    Start                    : 17.06.2014 07:30:00
    DurationHours            : 0
    StopWhenSystemBusy       : False
    Memory                   : 50 %
    Priority                 : Normal
    InputOutputThrottleLevel : None
    ScheduledTask            : \Microsoft\Windows\Deduplication\DailyGarbageCollection
    Full                     : False
    ReadOnly                 : False
    Name                     : ThroughputOptimization
    Enabled                  : True
    Type                     : Optimization
    Days                     : {Sunday, Monday, Tuesday, Wednesday...}
    Start                    : 14.07.2014 09:00:00
    DurationHours            : 6
    StopWhenSystemBusy       : False
    Memory                   : 40 %
    Priority                 : Normal
    InputOutputThrottleLevel : None
    ScheduledTask            : \Microsoft\Windows\Deduplication\ThroughputOptimization
    Full                     : False
    ReadOnly                 : False
    Name                     : ThroughputOptimization-2
    Enabled                  : False
    Type                     : Optimization
    Days                     : {Sunday, Monday, Tuesday, Wednesday...}
    Start                    : 14.07.2014 14:30:00
    DurationHours            : 9
    StopWhenSystemBusy       : False
    Memory                   : 40 %
    Priority                 : Normal
    InputOutputThrottleLevel : None
    ScheduledTask            : \Microsoft\Windows\Deduplication\ThroughputOptimization-2
    Full                     : False
    ReadOnly                 : False
    Name                     : WeeklyGarbageCollection
    Enabled                  : True
    Type                     : GarbageCollection
    Days                     : Saturday
    Start                    : 20.06.2014 16:00:00
    DurationHours            : 26
    StopWhenSystemBusy       : True
    Memory                   : 50 %
    Priority                 : High
    InputOutputThrottleLevel : None
    ScheduledTask            : \Microsoft\Windows\Deduplication\WeeklyGarbageCollection
    Full                     : True
    ReadOnly                 : False
    Name                     : WeeklyScrubbing
    Enabled                  : True
    Type                     : Scrubbing
    Days                     : Sunday
    Start                    : 17.05.2014 03:45:00
    DurationHours            : 0
    StopWhenSystemBusy       : True
    Memory                   : 50 %
    Priority                 : Normal
    InputOutputThrottleLevel : None
    ScheduledTask            : \Microsoft\Windows\Deduplication\WeeklyScrubbing
    Full                     : True
    ReadOnly                 : False
    Get-DedupJob
    Type               ScheduleType       StartTime    Progress   State      Volume
    Optimization       Scheduled                       0 %        Queued     D:
    Optimization       Manual             16:20        100 %      Running    D:
    Get-DedupJob |fl
    Volume                   : D:
    VolumeId                 : \\?\Volume{27fe6999-df3b-4864-81fe-853126e2c9cc}\
    Type                     : Optimization
    ScheduleType             : Scheduled
    StartTime                :
    Progress                 : 0 %
    State                    : Queued
    Id                       : {556DC9B2-068C-4CAE-9BE7-4104A0963F49}
    StopWhenSystemBusy       : True
    Memory                   : 25 %
    Priority                 : Low
    InputOutputThrottleLevel : Low
    ProcessId                : 0
    Full                     : False
    ReadOnly                 : False
    Volume                   : D:
    VolumeId                 : \\?\Volume{27fe6999-df3b-4864-81fe-853126e2c9cc}\
    Type                     : Optimization
    ScheduleType             : Manual
    StartTime                : 21.07.2014 16:20:04
    Progress                 : 100 %
    State                    : Running
    Id                       : {39D260C4-32D0-4511-8982-598DA53DB423}
    StopWhenSystemBusy       : False
    Memory                   : 60 %
    Priority                 : High
    InputOutputThrottleLevel : None
    ProcessId                : 4672
    Full                     : True
    ReadOnly                 : False

    I stopped the manually started optimization job now and see these event log entries.
    Data Deduplication job type "Optimization" on volume "D:" was cancelled manually.
    Data Deduplication job of type "Optimization" on volume "D:" has completed with return code: 0x8056533d, The operation was cancelled.
    Optimization reconciliation has completed.
    Volume: D: (\\?\Volume{27fe6999-df3b-4864-81fe-853126e2c9cc}\)
    Reconciled containers: 1908
    Unreconciled containers: 36
    Merged containers: 0
    Total reconciled references: 0
    Error code: 0x0
    Error message: NULL
    Optimization job has completed.
    Volume: D: (\\?\Volume{27fe6999-df3b-4864-81fe-853126e2c9cc}\)
    Error code: 0x0
    Error message:
    Savings rate: 96
    Saved space: 117761059523158
    Volume used space: 3988537147392
    Volume free space: 3312769826816
    Optimized file count: 12232
    In-policy file count: 12232
    Job processed space (bytes): 1815468262325
    Job elapsed time (seconds): 150522
    Job throughput (MB/second): 11.5
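The closing statistics also show why the job could never fit its schedule window; working through the numbers above:

```python
processed_bytes = 1_815_468_262_325  # "Job processed space (bytes)"
elapsed_s = 150_522                  # "Job elapsed time (seconds)"

# Reported throughput is bytes / seconds expressed in MiB/s.
throughput_mib_s = processed_bytes / elapsed_s / 2**20
print(round(throughput_mib_s, 1))  # 11.5, matching the reported value

# At that rate the job needed almost 42 hours -- far beyond a
# 6- or 9-hour DurationHours window, so the schedule can never keep up.
print(round(elapsed_s / 3600, 1))  # 41.8 hours
```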

  • Utility data collection job Failure on SQL server 2008

    Hi,
    I am facing a data collection job failure (Utility data collection) on a SQL Server 2008 server; below is the error message:
    <service Name>. The step did not generate any output.  Process Exit Code 5.  The step failed.
    The job name is collection_set_5_noncached_collect_and_upload. From searching, the issue appears to be permission-related, but I can't tell where exactly the access issue is coming from; this job runs under a proxy account. Thanks in advance.

    Hi Srinivas,
    Based on your description, you encounter the error message after configuring data collection in SQL Server 2008. For further analysis, could you please collect detailed log information? You can check the job history to find the error log around the issue, as mentioned in this
    article. Also, please check the Data Collector logs by right-clicking Data Collection in the Management folder and selecting View Logs.
    In addition, as you posted, exit code 5 is normally an 'Access is denied' code. Please make sure that the proxy account has admin permissions on your system, and ensure that the SQL Server service account has rights to access the cache folder.
    Thanks,
    Lydia Zhang

  • Auto alert mechanism for ATG scheduled job failure and BCC project failure

    Hello all,
    Could you please confirm if there are auto alert mechanisms for ATG scheduled job failure and BCC project failure?
    Waiting for reply.
    Thanks and regards,

    Hi,
    You need to write custom code to get alerts if an ATG Scheduler fails.
    For BCC project deployment monitoring, please refer to the below link in the documentation.
    Oracle ATG Web Commerce - Configure Deployment Event Listeners
    Thanks,
    Gopinath Ramasamy

  • How to check job failure reasons in Prime Infrastructure

    We have PI version 1.3. I applied a CLI template and saw that the job's last run status is Failure in the Jobs Dashboard, but I cannot see any detailed information about why it failed. Is there any way to see the job failure reasons? Thanks.

    Thanks for the tip. Actually I cannot see that small circle in the Jobs Dashboard. I finally found out that I need to click on the job, then click History; the small circle is there under History.

  • How to find the reason for a job failure

    Hi,
    I have created a job.
    It was running fine, but after a few days it showed the status as broken.
    How do we find the reason for a job failure?
    Thanks.

    There should be a trace file in either the udump or bdump directory (depending on Oracle version) on the DB server. If the job is broken it has probably failed 16 times, so you should have 16 trace files, each possibly showing the same error. The relevant trace files will have j00 in the name, showing that they were generated by a dbms_job job-queue process.
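To collect those job-queue trace files quickly, a small sketch (the dump directory path is a placeholder; use the actual udump/bdump location from your instance):

```python
import glob
import os

def find_job_traces(dump_dir):
    """List trace files whose name contains 'j00', i.e. files written by a
    DBMS_JOB job-queue background process (the 'j' processes)."""
    return sorted(glob.glob(os.path.join(dump_dir, "*j00*.trc")))

# Hypothetical dump directory -- check both udump and bdump on your server.
# for trc in find_job_traces("/u01/app/oracle/admin/ORCL/bdump"):
#     print(trc)
```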
    Ben

  • Getting 'Skipped for Optimization' job state with %complete less than 100 - PWA PS 2010

    I am getting Job State = Skipped for Optimization for Job Type = Status Update with %Complete = 16%. This job type shouldn't be skipped for optimization, given the understanding that MS intelligently skips duplicate jobs. Moreover, there are no pre or post jobs in the queue for the same project. What could be the reason here? Using PWA PS 2010.

    Eaman,
    I saw that you posted the question in another thread as well. Per Brian's reply there, the Status Update job can also be skipped; it is not necessarily a duplicate job.
    See here: http://social.technet.microsoft.com/Forums/projectserver/en-US/cefa327b-2ada-4a50-aacf-b9d1f2082a45/status-update-skipped-for-optimization-job-state-at-20-in-project-server-queue?forum=projectserver2010general
    Prasanna Adavi,PMP,MCTS,MCITP,MCT http://thinkepm.blogspot.com

  • XML publisher job failure

    I got this type of message:
    ACTION REQUIRED: XML Publisher Job Failure in qait2 Instance for Request 55086604.
    What should I do in this type of case?

    Please post details of your OS, database, and EBS versions. Did this ever work before? If so, what changes have been made?
    Can you post the contents of the xdo.cfg file?
    Document Processor Errors With "oracle.xml.parser.v2.XMLParseException: '--' is not allowed in comme          (Doc ID 388388.1)
    HTH
    Srini

  • Regarding production job failure

    Hi Friends,
    There was a production job failure; when I check the logs I find the following error:
    Restructuring of Database [Prepay] Failed (Error(1007045))
    Please let me know if you have any ideas.
    Thanks,
    Ram
    Edited by: KRK on Jun 23, 2009 12:34 PM

    Hi Glen,
    I changed these factors to improve data-loading time. After these changes the job started failing. I tried changing the caches back to their original values, but the job still fails. Below is the detailed log; please have a look and let me know where this is failing.
    I am using an ASO cube, and I build and load the cube through Essbase Integration Services.
    Here are the details logs
    Received Command Get Database State
    Wed Jun 24 08:45:45 2009Local/Prepay///Info(1013210)
    User thomas.ryan set active on database Prepay
    Wed Jun 24 08:45:45 2009Local/Prepay/Prepay/thomas.ryan/Info(1013091)
    Received Command AsoAggregateClear from user thomas.ryan
    Wed Jun 24 08:45:45 2009Local/Prepay/Prepay/thomas.ryan/Error(1270028)
    Cannot proceed: the cube has no data
    Wed Jun 24 08:45:45 2009Local/Prepay///Info(1013214)
    Clear Active on User thomas.ryan Instance [1]
    We have designed the process so that it builds dimensions first, then loads data, and then the default aggregation takes place.
    The changes I made are the following:
    1) Changed the application pending cache size limit from 32 MB to 64 MB
    2) Changed the database data retrieval buffers (buffer size and sort buffer size cache) from 10 KB to 512 KB
    My system configuration details
    OS: windows 2003 server
    Ram: 4 gb
    What would be the right parameters to proceed with, taking all of the above points into consideration?
    Please let me know if you have faced similar kind of issue or any ideas regarding this issue.
    Thanks,
    Ram

  • Job Failure Notification

    Anyone figured out if there is a notification rule for job status?

    Setting up job failure notification was very easy in 9i OEM.
    In 10g Grid Control, we need to create a job library and submit it, then create another rule for calling that job, and then click through each user/admin to subscribe them to the rule.
    Each user also has to check their own notification schedule.
    I think they should have allowed us to control the notification on the job creation screen itself instead of going through a rule.

  • Server 2012 R2 RDS Personal Collection -reuse VM names after a partial job failure

    I am currently testing RDS on Server 2012 R2, and as part of the test I have built and rebuilt multiple collections of VMs. I have noticed that when creating a new job to build a personal collection, I sometimes get VM build failures on one host. The build
    failures are usually due to a networking issue, which I then resolve; but when I go to re-create the VMs that failed, the VM names increment from the last VM instead of reusing the failed names. I want to be able to rebuild the VMs that
    failed.
    eg.
    VM-01, VM-02, VM-03 - On HOST-01 all work
    VM-04, VM-05, VM-06 - On HOST-02 fail
    VM-07, VM-08, VM-09 - On HOST-03 all work
    So on this example, when I re-run the job to build the missing/failed VMs, it would build  VM-10, VM-11, VM-12 on HOST-02.
    Is there a way to reset, or reuse the failed VM names, in the example above that would be VM-04, VM-05, and VM-06?
    Thanks

    Hi,
    Thank you for posting in Windows Server Forum.
    As a workaround, you can try the PowerShell cmdlet for the RD VDI infrastructure:
    New-RDVirtualDesktopCollection -CollectionName "ITCamp" -PooledManaged -StorageType CentralSmbShareStorage -VirtualDesktopAllocation 5 -VirtualDesktopTemplateHostServer $VHost -VirtualDesktopTemplateName $VDITemplateVM -ConnectionBroker $RDBroker -Domain "contoso.com" -Force -MaxUserProfileDiskSizeGB 40 -CentralStoragePath "\\fileserver1\NormalVMs" -VirtualDesktopNamePrefix "ITC" -OU "VDICampUsers" -UserProfileDiskPath \\fileserver1\NormalProfiles
    More information.
    Lab Ops 7 – Setting up a pooled VDI collection in Windows Server 2012 R2
    http://blogs.technet.com/b/andrew/archive/2013/10/28/lab-ops-4-windows-8-1-windows-2012r2-vdi.aspx
    Also check that when setting up RDS there is a physical NIC with an IP for the creation of the RDS vSwitch. If this vSwitch does not exist, the creation of the VMs will fail.
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • Oracle automatic statistics optimizer job is not running after full import

    Hi All,
    I did a full import into our QA database. The import was successful; however, GATHER_STATS_JOB has not run since Sep 18, 2010, even though it is enabled and scheduled. I queried LAST_ANALYZED to check, and it is confirmed that the job did not run after Sep 18, 2010.
    Please refer below for the output
    OWNER JOB_NAME ENABL STATE START_DATE END_DATE LAST_START_DATE NEXT_RUN_D
    SYS GATHER_STATS_JOB TRUE SCHEDULED 18-09-2010 06:00:02
    Oracle defined automatic optimizer statistics collection job
    =======
    SQL> select OWNER, JOB_NAME, STATUS, REQ_START_DATE,
         to_char(ACTUAL_START_DATE, 'dd-mm-yyyy HH24:MI:SS') ACTUAL_START_DATE, RUN_DURATION
         from dba_scheduler_job_run_details
         where job_name = 'GATHER_STATS_JOB' order by ACTUAL_START_DATE asc;
    OWNER  JOB_NAME          STATUS     REQ_START_DATE  ACTUAL_START_DATE    RUN_DURATION
    SYS    GATHER_STATS_JOB  SUCCEEDED                  16-09-2010 22:00:00  +000 00:00:22
    SYS    GATHER_STATS_JOB  SUCCEEDED                  17-09-2010 22:00:02  +000 00:00:18
    SYS    GATHER_STATS_JOB  SUCCEEDED                  18-09-2010 06:00:02  +000 00:00:26
    What could be the reason for the GATHER_STATS_JOB job not running although it is set to auto?
    SQL> select dbms_stats.get_param('AUTOSTATS_TARGET') from dual;
    DBMS_STATS.GET_PARAM('AUTOSTATS_TARGET')
    AUTO
    Does anybody have this kind of experience? Please share.
    Appreciate your responses.
    Regards
    srh

    So basically what you are saying is that if none of the tables have changed, then GATHER_STATS_JOB will not run. But I see that tables are updated, and still the job is not running. I did query dba_scheduler_jobs, and the job is enabled and scheduled (please see my previous post for the output).
    Am I missing anything here? Should I look at some parameter settings?

    GATHER_STATS_JOB will run, and if there is any table with a 10 percent or greater change in its data, it will gather statistics on that table. If no table's data has changed by at least 10 percent, it will not gather statistics.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#i41282
    Hope this helps.
    -Anantha
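The 10-percent staleness rule described above can be sketched as a simple predicate (this is an illustration of the rule as stated, not Oracle's actual implementation):

```python
def is_stale(num_rows, modifications, threshold=0.10):
    """Staleness test in the spirit of GATHER_STATS_JOB: a table qualifies
    for fresh statistics once roughly 10% of its rows have been modified."""
    if num_rows == 0:
        return True  # empty/never-analyzed baseline: gather anyway
    return modifications / num_rows >= threshold

print(is_stale(1000, 50))   # False: only 5% changed, the job skips the table
print(is_stale(1000, 150))  # True: 15% changed, statistics will be regathered
```

This is why "tables are updated" alone is not enough: updates below the threshold leave the table classified as not stale, and the job legitimately does no work.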

  • How to retrive error message for sql agent job failure

    My SQL Server Agent job failed and didn't store any error message in the job history. Is there any other table from which I can get the error info?
    The job has sql_message_id = 16389 and sql_severity = 16. What do they mean?

    this link will solve your problem
    http://www.sqlservercentral.com/articles/SQL+Server+Agent/67726/
    ebro
    CREATE PROCEDURE pr_GetStepFailureData
        @JobName VARCHAR(250)
    AS
    -- This procedure gets failure log data for the failed step of a SQL Server Agent job
    BEGIN
    DECLARE @job_id UNIQUEIDENTIFIER
    SELECT @job_id = job_id FROM dbo.sysjobs WHERE [name] = @JobName
    SELECT 'Step ' + CAST(JH.step_id AS VARCHAR(3)) + ' of ' + (SELECT CAST(COUNT(*) AS VARCHAR(5)) FROM dbo.sysjobsteps WHERE job_id = @job_id) AS StepFailed,
    -- run_date is stored as an integer yyyymmdd; format it as dd/mm/yyyy
    CAST(RIGHT(JH.run_date,2) AS CHAR(2)) + '/' + CAST(SUBSTRING(CAST(JH.run_date AS CHAR(8)),5,2) AS CHAR(2)) + '/' + CAST(LEFT(JH.run_date,4) AS CHAR(4)) AS DateRun,
    -- run_time is stored as an integer hhmmss; format it as hh:mm:ss (RIGHT, not LEFT, for the seconds)
    LEFT(RIGHT('0' + CAST(JH.run_time AS VARCHAR(6)),6),2) + ':' + SUBSTRING(RIGHT('0' + CAST(JH.run_time AS VARCHAR(6)),6),3,2) + ':' + RIGHT(RIGHT('0' + CAST(JH.run_time AS VARCHAR(6)),6),2) AS TimeRun,
    JS.step_name,
    JH.run_duration,
    CASE
        WHEN JSL.[log] IS NULL THEN JH.[Message]
        ELSE JSL.[log]
    END AS LogOutput
    FROM dbo.sysjobsteps JS
    INNER JOIN dbo.sysjobhistory JH
        ON JS.job_id = JH.job_id AND JS.step_id = JH.step_id
    LEFT OUTER JOIN dbo.sysjobstepslogs JSL
        ON JS.step_uid = JSL.step_uid
    -- Restrict to history rows newer than the second-to-last job outcome row (step 0)
    WHERE JH.INSTANCE_ID >
        (SELECT MIN(INSTANCE_ID)
         FROM (
             SELECT TOP (2) INSTANCE_ID, job_id
             FROM dbo.sysjobhistory
             WHERE job_id = @job_id
             AND STEP_ID = 0
             ORDER BY INSTANCE_ID DESC
         ) A)
    AND JS.step_id <> 0
    AND JH.job_id = @job_id
    AND JH.run_status = 0  -- 0 = Failed
    ORDER BY JS.step_id
    END
    GO
    EXEC pr_GetStepFailureData 'JobName'
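The string gymnastics in the procedure above exist because msdb.dbo.sysjobhistory stores run_date and run_time as packed integers (yyyymmdd and hhmmss). For comparison, the same conversion as a small Python sketch:

```python
from datetime import datetime

def job_history_datetime(run_date: int, run_time: int) -> datetime:
    """Combine sysjobhistory's packed-integer run_date (yyyymmdd) and
    run_time (hhmmss) into a real datetime. Zero-padding handles early
    times such as 40000, which means 04:00:00."""
    return datetime.strptime(f"{run_date:08d}{run_time:06d}", "%Y%m%d%H%M%S")

print(job_history_datetime(20140721, 162004))  # 2014-07-21 16:20:04
```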
