AD Group Discovery - Limit to an OU & run Full Discovery now

Hi folks,
I have a question. The ConfigMgr Admin Console UI lets me do these two things (attached screenshots):
 1. Add discovery scopes for Group Discovery.
 2. Run full discovery now.
I was wondering if both are possible through PowerShell too. I didn't see a parameter on the Set-CMDiscoveryMethod cmdlet to specify an AD container or group as a discovery scope, nor could I find a cmdlet/method to "Run Full Discovery Now".
I tried looking at the SDK here too, but no luck.
Any pointers? Is this possible?
Thanks and Regards.
EDIT: I was able to do these and have posted my findings at:
http://dexterposh.blogspot.in/2014/02/powershell-sccm-2012-r2-discovery.html
Hope this helps someone along the way.
Knowledge is Power{Shell}.

Hi,
I think the command you are looking for is Set-CMDiscoveryMethod.
Build a schedule in a $Schedule variable as below, then pass it via -PollingSchedule:
PS C:\> $Schedule = New-CMSchedule -Nonrecurring [-IsUtc] [-ScheduleString] [-Start <DateTime> ] [ <CommonParameters>]
Set-CMDiscoveryMethod -ActiveDirectoryGroupDiscovery -SiteCode "XYZ" <add other parameters> -PollingSchedule $Schedule
Full syntax below:
Set-CMDiscoveryMethod -ActiveDirectoryGroupDiscovery -SiteCode <String> [-DeltaDiscoveryIntervalMinutes <Int32> ] [-DiscoverDistributionGroupsMembership <Boolean> ] [-Enabled <Boolean> ] [-EnableDeltaDiscovery <Boolean> ] [-EnableFilteringExpiredLogon <Boolean> ] [-EnableFilteringExpiredPassword <Boolean> ] [-PollingSchedule <IResultObject> ] [-TimeSinceLastLogonDays <Int32> ] [-TimeSinceLastPasswordUpdateDays <Int32> ] [-Confirm] [-WhatIf] [ <CommonParameters>]
Set-CMDiscoveryMethod: http://technet.microsoft.com/en-us/library/jj870938(v=sc.10).aspx
New-CMSchedule: http://technet.microsoft.com/en-us/library/jj850116(v=sc.20).aspx
San

Similar Messages

  • Group member limit on subscriptions?

    Hello everyone,
    We have been using the subscription feature in KM for a while now, both for subscribing individual users and subscribing UME groups. This has been working well, so now we have a requirement to subscribe a UME group with a larger member list to a folder...
    This UME group has ~250 members. When I tried to subscribe the group, I got an error about the selected channel being invalid for this type of group (the channel we use is Email). So I took a bunch of users out of the group (down to ~15 members) and the subscription saved successfully.
    We currently have a group of 89 people subscribed to a folder (this group has grown over time) and I believe this is working, although now I'm wondering... I want to check in with some of the folks in this group to make sure they're getting the subscription emails. Does anyone know of a group size limit for email subscription notifications? Has anyone else run into this and perhaps have a workaround or advice?
    Thanks in advance for the help!  It's always appreciated. 
    - Fallon

    Thank you for the input, Trevor!
    We removed most of the users from the UME group, subscribed the UME group to the folder, and then re-populated the group.... That worked fine, and the subscription emails are being sent.
    I was happy to hear you have a large group subscribed as well.  It's nice to have that verification. 
    Thanks again!
    Fallon

  • How to tackle Forward limit active without stop running vi

    How to tackle "Forward limit active" without stopping the running VI: Our robot comprises 8 joints. To limit the robot's work area we use limit buttons, but the problem is that whenever a limit button is active the VI stops. I tried decomposing the error cluster, but it doesn't work. Is there a good method to solve this problem? Thanks!!

    Please provide more information about the hard- and software you are using:
    - NI-Motion board type
    - NI-Motion software version
    - LabVIEW version
    - Simple example code that reproduces the problem
    Best regards,
    Jochen Klier
    Applications Engineering Group Leader
    National Instruments Germany GmbH

  • How to restrict users working on Windows 7 clients from accessing Windows Explorer and other systems in the network through Group Policy with a domain controller running on Windows Server 2008 r2

    Dear All,
    We are having an infrastructure setup of around 500 client computers managed through group policy.
    Recently the domain controllers have been migrated from Windows Server 2003 to Server 2008 R2.
    Since this environment requires extremely strict lockdown, we need to find a solution for restricting users from accessing anything locally.
    It would be great if you can assist me with the following query.
    How to restrict users logged on Windows 7 clients from accessing Windows Explorer and browsing other systems in the network through Group Policy with a domain controller running on Windows Server 2008 r2 ?
    Can we disable Network Tab on the left hand pane ?
    explorer.exe is blocked already, but users are able to enter the Windows Explorer by clicking on the name which is visible on the Start Menu.

    >   * explorer.exe is blocked already, but users are able to enter the
    >     Windows Explorer by clicking on the name which is visible on the
    >     Start Menu.
    You cannot block explorer.exe when you do not replace the shell - the
    desktop you see effectively IS explorer.exe...
    Your requirement sounds like you need a custom shell:
    http://gpsearch.azurewebsites.net/#2812
    Martin
    How about reading a GOOD book about GPOs?
    NO THEY ARE NOT EVIL, if you know what you are doing:
    Good or bad GPOs?
    And if IT bothers me - coke bottle design refreshment :))

  • Can I use ClassLoader to limit the number of running instances of my app?

    Hi
    I want to limit the number of running instances of my app to 1 (like what IDEA and... do), can I do that with a class loader? How?
    Thanks in advance,
    Behrang S.

    No.
    If you search the advanced forum you will find several discussions on how you can do it.
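    Those forum discussions usually land on a cross-process lock rather than a ClassLoader trick. One common, portable technique is to bind a localhost port and treat a successful bind as "I am the only instance". A minimal sketch of that idea (shown in Python for brevity; the port number is an arbitrary choice, and the same pattern works in Java with ServerSocket):

```python
import socket

def acquire_single_instance_lock(port=47200):
    """Try to bind a localhost port as a cross-process mutex.

    Returns the bound socket on success (keep a reference to it for the
    lifetime of the app), or None if another instance already holds it.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        # The OS guarantees only one process can bind a given port, so a
        # successful bind means no other instance is running.
        s.bind(("127.0.0.1", port))
        return s
    except OSError:
        s.close()
        return None

if __name__ == "__main__":
    lock = acquire_single_instance_lock()
    if lock is None:
        print("Another instance is already running; exiting.")
    else:
        print("Got the lock; running as the only instance.")
```

    The lock releases automatically when the process exits, which avoids the stale-lock-file problem of file-based approaches.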

  • Cannot group multiple limit items in single local PO (ECS)

    We are on SRM 5.0 (SRM SERVER 5.5 SP 9) and are using SRM in the
    extended classic scenario.
    We use limit carts and want to be able to create a single PO with
    multiple limit items. However, it seems that the implementation of note
    1020305 as of SP 9 prevents the possibility to group limit items together in a
    single PO completely. I tried implementing the BADI 'BBP_GROUP_LOC_PO' but it had no effect.
    Can someone please provide some advice on how to group multiple limit items into a single PO?
    Thanks,
    Nick

    Hi Atul,
    Thanks for the note - it was useful, although I don't think I can get my issue resolved as it implies we cannot group multiple limits into one PO.
    Can you please explain what the sentence "Multiple packages are possible starting from SRM 5.0 with hierarchies" means?
    What are hierarchies and how do they work? Will they allow us to have multiple limit items on one PO?
    Thanks again,
    Nick

  • Task fails while running Full load ETL

    Hi All,
    I am running a full-load ETL for Oracle R12 (vanilla instance) HR, but 4 tasks are failing: SDE_ORA_JobDimension, SDE_ORA_HRPositionDimension, SDE_ORA_CodeDimension_Pay_Level and SDE_ORA_CodeDimensionJob. I changed the parameters for all these tasks as mentioned in the installation guide and rebuilt. Please help me out.
    The log for SDE_ORA_JobDimension looks like this:
    DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
    DIRECTOR> VAR_27028 Use override value [ORA_R12] for session parameter:[$DBConnection_OLTP].
    DIRECTOR> VAR_27028 Use override value [9] for mapping parameter:[$$DATASOURCE_NUM_ID].
    DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_JobDimension.$$JOBCODE_FLXFLD_SEGMENT_COL].
    DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_JobDimension.$$JOBFAMILYCODE_FLXFLD_SEGMENT_COL].
    DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_JobDimension.$$LAST_EXTRACT_DATE].
    DIRECTOR> VAR_27028 Use override value [DEFAULT] for mapping parameter:[$$TENANT_ID].
    DIRECTOR> TM_6014 Initializing session [SDE_ORA_JobDimension_Full] at [Fri Sep 26 10:52:05 2008]
    DIRECTOR> TM_6683 Repository Name: [Oracle_BI_DW_Base]
    DIRECTOR> TM_6684 Server Name: [Oracle_BI_DW_Base_Integration_Service]
    DIRECTOR> TM_6686 Folder: [SDE_ORAR12_Adaptor]
    DIRECTOR> TM_6685 Workflow: [SDE_ORA_JobDimension_Full]
    DIRECTOR> TM_6101 Mapping name: SDE_ORA_JobDimension [version 1]
    DIRECTOR> TM_6827 [C:\Informatica\PowerCenter8.1.1\server\infa_shared\Storage] will be used as storage directory for session [SDE_ORA_JobDimension_Full].
    DIRECTOR> CMN_1805 Recovery cache will be deleted when running in normal mode.
    DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
    DIRECTOR> TM_6703 Session [SDE_ORA_JobDimension_Full] is run by 32-bit Integration Service [node01_HSCHBSCGN20031], version [8.1.1], build [0831].
    MANAGER> PETL_24058 Running Partition Group [1].
    MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
    MANAGER> PETL_24001 Parallel Pipeline Engine running.
    MANAGER> PETL_24003 Initializing session run.
    MAPPING> CMN_1569 Server Mode: [ASCII]
    MAPPING> CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> TM_6151 Session Sort Order: [Binary]
    MAPPING> TM_6156 Using LOW precision decimal arithmetic
    MAPPING> TM_6180 Deadlock retry logic will not be implemented.
    MAPPING> TM_6307 DTM Error Log Disabled.
    MAPPING> TE_7022 TShmWriter: Initialized
    MAPPING> TM_6007 DTM initialized successfully for session [SDE_ORA_JobDimension_Full]
    DIRECTOR> PETL_24033 All DTM Connection Info: [<NONE>].
    MANAGER> PETL_24004 Starting pre-session tasks. : (Fri Sep 26 10:52:13 2008)
    MANAGER> PETL_24027 Pre-session task completed successfully. : (Fri Sep 26 10:52:14 2008)
    DIRECTOR> PETL_24006 Starting data movement.
    MAPPING> TM_6660 Total Buffer Pool size is 32000000 bytes and Block size is 1280000 bytes.
    READER_1_1_1> DBG_21438 Reader: Source is [dev], user [apps]
    READER_1_1_1> BLKR_16003 Initialization completed successfully.
    WRITER_1_*_1> WRT_8146 Writer: Target is database [orcl], user [obia], bulk mode [ON]
    WRITER_1_*_1> WRT_8106 Warning! Bulk Mode session - recovery is not guaranteed.
    WRITER_1_*_1> WRT_8124 Target Table W_JOB_DS :SQL INSERT statement:
    INSERT INTO W_JOB_DS(JOB_CODE,JOB_NAME,JOB_DESC,JOB_FAMILY_CODE,JOB_FAMILY_NAME,JOB_FAMILY_DESC,JOB_LEVEL,W_FLSA_STAT_CODE,W_FLSA_STAT_DESC,W_EEO_JOB_CAT_CODE,W_EEO_JOB_CAT_DESC,AAP_JOB_CAT_CODE,AAP_JOB_CAT_NAME,ACTIVE_FLG,CREATED_BY_ID,CHANGED_BY_ID,CREATED_ON_DT,CHANGED_ON_DT,AUX1_CHANGED_ON_DT,AUX2_CHANGED_ON_DT,AUX3_CHANGED_ON_DT,AUX4_CHANGED_ON_DT,SRC_EFF_FROM_DT,SRC_EFF_TO_DT,DELETE_FLG,DATASOURCE_NUM_ID,INTEGRATION_ID,TENANT_ID,X_CUSTOM) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    WRITER_1_*_1> WRT_8270 Target connection group #1 consists of target(s) [W_JOB_DS]
    WRITER_1_*_1> WRT_8003 Writer initialization complete.
    WRITER_1_*_1> WRT_8005 Writer run started.
    READER_1_1_1> BLKR_16007 Reader run started.
    READER_1_1_1> RR_4029 SQ Instance [mplt_BC_ORA_JobDimension.Sq_Jobs] User specified SQL Query [SELECT
    PER_JOBS.JOB_ID,
    PER_JOBS.BUSINESS_GROUP_ID,
    PER_JOBS.JOB_DEFINITION_ID,
    PER_JOBS.DATE_FROM,
    PER_JOBS.DATE_TO,
    PER_JOBS.LAST_UPDATE_DATE AS CHANGED_ON_DT,      PER_JOBS.LAST_UPDATED_BY, PER_JOBS.CREATED_BY, PER_JOBS.CREATION_DATE,
    PER_JOB_DEFINITIONS.LAST_UPDATE_DATE AS AUX1_CHANGED_ON_DT,
    PER_JOB_DEFINITIONS.JOB_DEFINITION_ID,
    PER_JOBS.NAME,
    PER_JOBS.JOB_INFORMATION1, PER_JOBS.JOB_INFORMATION3,
    PER_JOBS. AS JOB_FAMILY_CODE, PER_JOB_DEFINITIONS.  AS JOB_CODE,
      '0' AS X_CUSTOM
    FROM
    PER_JOBS, PER_JOB_DEFINITIONS
    WHERE
    PER_JOBS.JOB_DEFINITION_ID = PER_JOB_DEFINITIONS.JOB_DEFINITION_ID]
    WRITER_1_*_1> WRT_8158
    *****START LOAD SESSION*****
    Load Start Time: Fri Sep 26 10:53:05 2008
    Target tables:
    W_JOB_DS
    READER_1_1_1> RR_4049 SQL Query issued to database : (Fri Sep 26 10:53:05 2008)
    READER_1_1_1> CMN_1761 Timestamp Event: [Fri Sep 26 10:53:06 2008]
    READER_1_1_1> RR_4035 SQL Error [
    ORA-01747: invalid user.table.column, table.column, or column specification
    Database driver error...
    Function Name : Execute
    SQL Stmt : SELECT
    PER_JOBS.JOB_ID,
    PER_JOBS.BUSINESS_GROUP_ID,
    PER_JOBS.JOB_DEFINITION_ID,
    PER_JOBS.DATE_FROM,
    PER_JOBS.DATE_TO,
    PER_JOBS.LAST_UPDATE_DATE AS CHANGED_ON_DT, PER_JOBS.LAST_UPDATED_BY, PER_JOBS.CREATED_BY, PER_JOBS.CREATION_DATE,
    PER_JOB_DEFINITIONS.LAST_UPDATE_DATE AS AUX1_CHANGED_ON_DT,
    PER_JOB_DEFINITIONS.JOB_DEFINITION_ID,
    PER_JOBS.NAME,
    PER_JOBS.JOB_INFORMATION1, PER_JOBS.JOB_INFORMATION3,
    PER_JOBS. AS JOB_FAMILY_CODE, PER_JOB_DEFINITIONS. AS JOB_CODE,
    '0' AS X_CUSTOM
    FROM
    PER_JOBS, PER_JOB_DEFINITIONS
    WHERE
    PER_JOBS.JOB_DEFINITION_ID = PER_JOB_DEFINITIONS.JOB_DEFINITION_ID
    Oracle Fatal Error
    Database driver error...
    Function Name : Execute
    SQL Stmt : SELECT
    PER_JOBS.JOB_ID,
    PER_JOBS.BUSINESS_GROUP_ID,
    PER_JOBS.JOB_DEFINITION_ID,
    PER_JOBS.DATE_FROM,
    PER_JOBS.DATE_TO,
    PER_JOBS.LAST_UPDATE_DATE AS CHANGED_ON_DT, PER_JOBS.LAST_UPDATED_BY, PER_JOBS.CREATED_BY, PER_JOBS.CREATION_DATE,
    PER_JOB_DEFINITIONS.LAST_UPDATE_DATE AS AUX1_CHANGED_ON_DT,
    PER_JOB_DEFINITIONS.JOB_DEFINITION_ID,
    PER_JOBS.NAME,
    PER_JOBS.JOB_INFORMATION1, PER_JOBS.JOB_INFORMATION3,
    PER_JOBS. AS JOB_FAMILY_CODE, PER_JOB_DEFINITIONS. AS JOB_CODE,
    '0' AS X_CUSTOM
    FROM
    PER_JOBS, PER_JOB_DEFINITIONS
    WHERE
    PER_JOBS.JOB_DEFINITION_ID = PER_JOB_DEFINITIONS.JOB_DEFINITION_ID
    Oracle Fatal Error].
    READER_1_1_1> CMN_1761 Timestamp Event: [Fri Sep 26 10:53:06 2008]
    READER_1_1_1> BLKR_16004 ERROR: Prepare failed.
    WRITER_1_*_1> WRT_8333 Rolling back all the targets due to fatal session error.
    WRITER_1_*_1> WRT_8325 Final rollback executed for the target [W_JOB_DS] at end of load
    WRITER_1_*_1> WRT_8035 Load complete time: Fri Sep 26 10:53:06 2008
    LOAD SUMMARY
    ============
    WRT_8036 Target: W_JOB_DS (Instance Name: [W_JOB_DS])
    WRT_8044 No data loaded for this target
    WRITER_1__1> WRT_8043 ****END LOAD SESSION*****
    MANAGER> PETL_24031
    ***** RUN INFO FOR TGT LOAD ORDER GROUP [1], CONCURRENT SET [1] *****
    Thread [READER_1_1_1] created for [the read stage] of partition point [mplt_BC_ORA_JobDimension.Sq_Jobs] has completed. The total run time was insufficient for any meaningful statistics.
    Thread [TRANSF_1_1_1] created for [the transformation stage] of partition point [mplt_BC_ORA_JobDimension.Sq_Jobs] has completed. The total run time was insufficient for any meaningful statistics.
    Thread [WRITER_1_*_1] created for [the write stage] of partition point [W_JOB_DS] has completed. The total run time was insufficient for any meaningful statistics.
    MANAGER> PETL_24005 Starting post-session tasks. : (Fri Sep 26 10:53:06 2008)
    MANAGER> PETL_24029 Post-session task completed successfully. : (Fri Sep 26 10:53:06 2008)
    MAPPING> TM_6018 Session [SDE_ORA_JobDimension_Full] run completed with [0] row transformation errors.
    MANAGER> PETL_24002 Parallel Pipeline Engine finished.
    DIRECTOR> PETL_24013 Session run completed with failure.
    DIRECTOR> TM_6022
    SESSION LOAD SUMMARY
    ================================================
    DIRECTOR> TM_6252 Source Load Summary.
    DIRECTOR> CMN_1740 Table: [Sq_Jobs] (Instance Name: [mplt_BC_ORA_JobDimension.Sq_Jobs])
         Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
    DIRECTOR> TM_6253 Target Load Summary.
    DIRECTOR> CMN_1740 Table: [W_JOB_DS] (Instance Name: [W_JOB_DS])
         Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
    DIRECTOR> TM_6023
    ===================================================
    DIRECTOR> TM_6020 Session [SDE_ORA_JobDimension_Full] completed at [Fri Sep 26 10:53:07 2008]

    To make use of the warehouse you would probably want to connect to an EBS instance in order to populate it, since the execution plan you intend to run is designed for the EBS data model. If you really didn't want to connect to the EBS instance to pull data, you could build one using the Universal Adapter, which lets you load from flat files. I wouldn't recommend making this a habit for an actual implementation, though, as it creates another potential point of failure (populating the flat files).
    Thanks,
    Austin
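    Note that the ORA-01747 in the log lines up with the two mapping parameters that defaulted to [] (JOBCODE_FLXFLD_SEGMENT_COL and JOBFAMILYCODE_FLXFLD_SEGMENT_COL): an empty substitution leaves the invalid fragment "PER_JOBS. AS JOB_FAMILY_CODE" in the generated SQL. A minimal sketch (hypothetical helper names, not Informatica's actual substitution code) of how such empty substitutions could be caught before the session runs:

```python
import re

def substitute_params(sql_template, params):
    """Naive mapping-parameter substitution, mimicking how an empty
    $$..._FLXFLD_SEGMENT_COL default produces a dangling column reference."""
    for name, value in params.items():
        sql_template = sql_template.replace(name, value)
    return sql_template

def find_dangling_columns(sql):
    """Return table references left as 'TABLE. AS' with no column name."""
    return re.findall(r"\b(\w+)\.\s+AS\b", sql)

template = ("SELECT PER_JOBS.$$JOBCODE_FLXFLD_SEGMENT_COL AS JOB_FAMILY_CODE "
            "FROM PER_JOBS")
sql = substitute_params(template, {"$$JOBCODE_FLXFLD_SEGMENT_COL": ""})
print(find_dangling_columns(sql))  # ['PER_JOBS'] -> the ORA-01747 culprit
```

    In practice the fix is to set those flexfield segment-column parameters to real column names before rebuilding, as the installation guide describes.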

  • I have a private Adobe ID and I have a Lightroom Testversion running there! Now my university gave me access to Creative Cloud for Teams and I want to use my Lightroom catalogues with this ID! How can I do this?

    I have a private Adobe ID and I have a Lightroom Testversion running there! Now my university gave me access to Creative Cloud for Teams and I want to use my Lightroom catalogues with this ID! How can I do this?

    Hi,
    thank you, it’s done! The Chat helped me! Now I cannot go into this forum anymore with my old password, but that’s not a big problem.
    Best wishes,
    Christian

  • HT201628 so now i've followed this help now i can't re download itunes. i'm on 10.5.8. Still runs great but now i have no itunes. please can a suggestion be made that doesn't involve spending hundreds of pounds? i was only trying to get an iphone connecte

    I'm on Mac OS X 10.5.8. It still runs great, but now I have no iTunes: I followed this post (which covers my first issue) correctly up to the "reinstall the latest version of iTunes" step, and I can't.
    I've been running Logic Pro on this Mac with no issues, so I've not upgraded the software etc.
    I need a way around this problem without having to upgrade the OS, if there is one.
    Does anyone have a suggestion that doesn't involve spending hundreds of pounds?
    I was only trying to get an iPhone connected, and the article says 10.6.8 or earlier?? Mine is definitely earlier.
    MacBook - Mac OS X v10.5.8... trying to connect an iPhone 4S (iOS 6).
    The problem now: I need to reinstall iTunes but can't, probably due to my current OS.
    Please help, I'm pretty gutted about this. I should have researched a little further, methinks. Cheers guys.

    said article below....
    HT1747: iTunes: How to remove and reinstall the Apple Mobile Device Service on Mac OS X 10.6.8 or Earlier

  • Photoshop v7.0 I get the error "missing or invalid personalization information"  This used to run, I'm now on Windows 7 64 bit.  Anyone have any ideas please?  Thanks.

    I get the error "missing or invalid personalization information"  This used to run, I'm now on Windows 7 64 bit.  Anyone have any ideas please?  Thanks.

    Hi,
    If my memory serves me right (it's been some while since I changed, and I have not used Photoshop for some years...) I was running it on XP. I upgraded to Win 7 on a new machine but ported across files, programs etc. I cannot recall if Photoshop ever ran on Win 7.
    I have tried to reinstall, but the CD is warped and, despite applying pressure to it, neither of my drives likes it. I have Photoshop 7 on my third hard drive, a portable drive, which came from the XP machine.
    Sorry, long-winded but I hope it helps.

  • SMS_DISCOVERY_DATA_MANAGER Message ID 2636 and 620. Discovery Data Manager failed to process the discovery data record (DDR)

    Hi
    I'm seeing this critical error on my primary.
    SMS_DISCOVERY_DATA_MANAGER Message ID 2636 and 620. 
    Discovery Data Manager failed to process the discovery data record (DDR) "D:\Prog.....\inboxes\auth\ddm.box\userddrsonly\adu650dh.DDR", because it cannot update the data source.
    These DDRs actually end up under the ddm.box\userddrsonly\BAD_DDRS folder.
    I see a ton of DDR files in that folder. Not sure if I can delete them, so I moved them to a temp folder, but AD User Discovery keeps generating them.
    Any help?
    Thanks
    UK
    

    Check the ddm.log file for more information.
    My Blog: http://www.petervanderwoude.nl/
    Follow me on twitter: pvanderwoude

  • Stopped server while running full synchronization of SQL MA

    Hi Everyone,
    I am currently facing an issue on the Sync server where a run shows "Stopped server" while running Full Synchronization of the SQL MA. This is not happening consistently: out of 10 runs in a week it shows the error about 3 times, and the other 7 times it runs fine.
    What could be the reason this is occurring?
    Your response will be highly appreciated
    Thanks,
    Aman

    Hi Nosh,
    My first run profiles are FI & FS; then I run FS, which is where I face this "stopped server" issue. The same setup runs absolutely fine with ILM, but in FIM it shows this error, and the error is not permanent: it fails two times, then the third time it runs perfectly.
    Please suggest
    Thanks,
    Aman

  • My all reports are running very slow now, I am using Reports 6i?

    Dear Friends,
    All my reports are running very slow now; I am using Reports 6i. A few months ago their speed was better. Please suggest a solution to this problem.
    Best regards,
    Shahzad

    Get statspack/AWR running against your database and analyze the reports. 15 minutes interval between snaps should be enough.
    That should give you a clue as to what is going wrong in your database.
    Since you are saying that the reports speed was better a few months ago, it is possible that you are suffering from "data growth".

  • TS3276 hi after updating to Mountain Lion, it becomes difficult to quit Mail, always have to Force Quit. And while Mail is on standby, the fan keeps on running full power, the laptop is hot. Why is this happening?

    hi after updating to Mountain Lion, it becomes difficult to quit Mail, always have to Force Quit. And while Mail is on standby, the fan keeps on running full power, the laptop is hot. Why is this happening?

    If your computer was hot it wasn't asleep - either it didn't go to sleep, or it woke up after you put it to sleep. I've seen both things happen. First, read this article and follow the instructions.
    1) For the next few days/weeks pay attention to the blinking light on the front of your computer when you close the lid. Don't put the computer into your backpack/case until it begins to blink. Wait a minute or so after it starts to blink to make sure it doesn't stop blinking - the signal that it woke back up.
    2) Having followed the above instructions, if you take the computer out and it is hot again, or if you take it out and discover it has powered down (you'll know this because when it starts up the battery will be drained and you'll see a progress bar over a ghostly display that will eventually become normal), immediately start the Console program and search on 'sleep' and then 'wake'. This should bring up all the recent log entries about sleep state changes (going to sleep and waking up). You should see one for when you put the computer to sleep and put it away, and the next one should be from just after you got it out. You'll find at least one, though, that shouldn't be there. That will tell you when the computer woke up (when it shouldn't have) and might give a clue as to why.
    One of the whys could be a magnet that is making the computer think the lid has been opened.
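    The Console search described above boils down to filtering the log for sleep/wake entries so the unexpected wake-up stands out. A minimal sketch of that filter (the sample lines are illustrative, not the exact macOS log format):

```python
def sleep_wake_events(lines):
    """Return only the log lines that mention sleep or wake transitions."""
    keywords = ("sleep", "wake")
    return [ln for ln in lines if any(k in ln.lower() for k in keywords)]

sample_log = [
    "10:01:02 kernel: System Sleep",
    "10:05:33 kernel: Wake reason: EC.LidOpen",
    "10:05:40 loginwindow: session active",
]
for event in sleep_wake_events(sample_log):
    print(event)
```

    Comparing the timestamps of consecutive sleep/wake pairs against when you actually closed and opened the lid is what reveals the rogue wake-up.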

  • /private/var/tmp running full with cache

    We're importing a lot of JPEG and Photoshop pictures, and on a big job the folder /var/tmp (locally on the FCServer, which is also the only Compressor) fills up with cache files.
    A lot of these:
    -rw------- 1 admin wheel 212310770 9 jun 09:38 fcsvrxmp_imgio_cachetuAko5
    -rw------- 1 admin wheel 206195341 9 jun 09:38 fcsvrxmp_imgio_cachevjmnyL
    -rw------- 1 admin wheel 202760422 9 jun 09:38 fcsvrxmp_imgio_cacheh66AYm
    -rw------- 1 admin wheel 200220075 9 jun 09:38 fcsvrxmp_imgio_cacheBEuj54
    -rw------- 1 admin wheel 14354020 9 jun 09:38 fcsvrxmp_imgio_cacheoEhooq
    -rw------- 1 admin wheel 268042240 9 jun 09:36 fcsvrxmp_imgio_cacheBLu5fu
    -rw------- 1 admin wheel 104792064 9 jun 09:36 fcsvrxmp_imgio_cachecQeFLO
    -rw------- 1 admin wheel 106102784 9 jun 09:36 fcsvrxmp_imgio_cachehU9Dje
    -rw------- 1 admin wheel 268435456 9 jun 09:36 fcsvrxmp_imgio_cachem77wYW
    -rw------- 1 admin wheel 106037248 9 jun 09:36 fcsvrxmp_imgio_cachenXrHzK
    -rw------- 1 admin wheel 89784320 9 jun 09:36 fcsvrxmp_imgio_cacheggr04X
    -rw------- 1 admin wheel 80150528 9 jun 09:36 fcsvrxmp_imgio_cacheilEkN0
    -rw------- 1 admin wheel 224499080 9 jun 09:36 fcsvrxmp_imgio_cachexpzbwB
    -rw------- 1 admin wheel 210338595 9 jun 09:36 fcsvrxmp_imgio_cacheaBpQJj
    -rw------- 1 admin wheel 208639941 9 jun 09:36 fcsvrxmp_imgio_cacheKVluet
    -rw------- 1 admin wheel 268435456 9 jun 09:34 fcsvrxmp_imgio_cacheIvAtRi
    -rw------- 1 admin wheel 260833280 9 jun 09:34 fcsvrxmp_imgio_cacheM2NNbU
    -rw------- 1 admin wheel 261554176 9 jun 09:34 fcsvrxmp_imgio_cacheRb4Zey
    -rw------- 1 admin wheel 161349632 9 jun 09:34 fcsvrxmp_imgio_cachenc145z
    -rw------- 1 admin wheel 217251840 9 jun 09:34 fcsvrxmp_imgio_cache8HN8Ne
    -rw------- 1 admin wheel 157089792 9 jun 09:34 fcsvrxmp_imgio_cache8gAH9B
    -rw------- 1 admin wheel 23509689 9 jun 09:34 fcsvrxmp_imgio_cacheRyJCi5
    -rw------- 1 admin wheel 213979433 9 jun 09:33 fcsvrxmp_imgio_cacheDaB5ox
    -rw------- 1 admin wheel 22190308 9 jun 09:33 fcsvrxmp_imgio_cache3Fbjf7
    As you can see, big files... so big that over the last 2 days my server's startup HD almost filled completely (1 GB left).
    So it looks like FCServer is not cleaning up after itself. Can I do something about this? Can I at least change the maximum cache size?
    Thanks
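    Pending a proper fix, a cleanup sweep for cache files like the ones listed above could be sketched as follows (the helper names and the 24-hour threshold are assumptions; verify against a running job's needs before actually deleting anything):

```python
import fnmatch
import os
import time

def stale_cache_files(directory, pattern="fcsvrxmp_imgio_cache*", max_age_hours=24):
    """Return (path, size) pairs for cache files older than max_age_hours."""
    cutoff = time.time() - max_age_hours * 3600
    stale = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if fnmatch.fnmatch(name, pattern) and os.path.isfile(path):
            if os.path.getmtime(path) < cutoff:
                stale.append((path, os.path.getsize(path)))
    return stale

def report(directory):
    """Print how much space the stale cache files would reclaim."""
    files = stale_cache_files(directory)
    total = sum(size for _, size in files)
    print(f"{len(files)} stale cache files, {total / 1e9:.2f} GB reclaimable")
```

    Running the report first, and only deleting files well past any plausible job duration, avoids pulling cache files out from under an in-flight Compressor job.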

    Hi,
    It was a corrupted PSD; one corrupt file caused all the files after it to fail too. We removed the corrupt one and ran 'Analyze' on all the others.
    Problem solved.
