ERE for AD is in Pending state only

Experts,
I created two MAs in a newly installed FIM 2010 R2:
1. FIM Service MA
2. AD MA
I create users in the FIM Service manually. When I run import and sync, I can see the users in the metaverse.
Then I created: Set >> Workflow >> Outbound sync rule for AD >> MPR.
I move users into the set manually, and I can see the 'AD Bound Rule' applied in the ERE with status Pending.
The status stays Pending even though I have run every combination of run profiles, and there are no errors during any MA run.
I believe I created the right mappings in the outbound sync rule and chose the option 'Create resource in external system'.
I also enabled provisioning in the FIM Synchronization Service.
Any ideas, please?
Thanks,
Mann

Thanks, Cameron!
In my case I now see the ERE status changed to 'Not Applied'.
How can I track down where the error in AD provisioning is?
Also, as I understand it, before an account gets exported to AD it should be visible in the AD connector space. At which of the steps you mentioned will the account become visible in the AD connector space?
Thanks,
Mann

Similar Messages

  • Setting Sales Tax for in State ONLY??

    What we have here is a failure to communicate... Either it is a language problem with support, or they can't explain the feature.
    None of the answers I'm getting are answers to the question I'm asking, and the documentation is not clear either.
    I have clients who sell products and ONLY need to charge sales tax to customers in the same state (PA), but BC is charging 6% to all customers regardless of location. We realize now that we should not have set the tax on individual products to PA 6% (although that is VERY counterintuitive), and we set up our tax code for the USA with all states at zero and PA at 6%. Tech support cannot tell me how I should be setting taxes globally for products. It is almost as if this feature does not exist:
    ME: No. This is not acceptable, nor does it make any sense.
    In the USA, most companies only charge sales tax to in-state customers, not out-of-state customers. Therefore we need to be able to set up a global tax NOT on the product and NOT on the country, but on the state ONLY.
    If this option is not available, it is a worthless system.
    Parikshit Nath (Adobe Business Catalyst Support) 
    Dec 20 09:10
    There are two proper ways to add tax.
    1. Add tax to shipping options: In this case, you create tax codes and shipping options too. You then add a tax code to the shipping option according to the customer's destination country. Add a shipping option and select the tax code in it. This is the more widely used method of applying tax. From what I have seen, you haven't created any shipping options yet, hence the confusion.
    2. Add tax directly to the product: This is what you had done. As you have already mentioned, this applies the tax to the product irrespective of the customer's address details.
    Hope this clarifies the situation.
    Cheers.
    You can set it as not applicable. If you apply tax at the product level, it will be applied to all orders for that product, regardless of the visitor's location. Check this document: http://helpx.adobe.com/business-catalyst/partner/tax-codes.html

    Your question is really inappropriate for Apple Discussions. Apple Discussions (AD) is a user-to-user forum for help with technical problems and questions about Apple products; sales tax is certainly not one of those topics.
    However, to answer your question: if your friend purchases a Mac in an Apple Store, or if you purchase it via the online store, tax will be charged. Sales taxes in the US are typically between 6 and 8 percent of the purchase cost, depending on the state in which it is purchased. There are a few states that do not levy sales taxes, such as Oregon and Montana.
    In addition, when your friend takes the computer into your country, you will likely be charged import duties and taxes as well. If your expectation is that this ploy will save you some money, I wouldn't count on it.

  • Ola script for update stats only

    I want Ola's scripts for updating stats only, but the versions I find on the internet rebuild indexes as well.
    http://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html
    Thanks

    By default, an index rebuild also updates the index's statistics.
    If you just need a script to update index and column statistics, you can refer to the link below:
    http://www.sqlusa.com/bestpractices2005/administration/updatestatistics/
    You can also use the same Ola script to update only the statistics:
    http://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html
    C. Update statistics on all user databases
    EXECUTE dbo.IndexOptimize
      @Databases = 'USER_DATABASES',
      @FragmentationLow = NULL,
      @FragmentationMedium = NULL,
      @FragmentationHigh = NULL,
      @UpdateStatistics = 'ALL';
    D. Update modified statistics on all user databases
    EXECUTE dbo.IndexOptimize
      @Databases = 'USER_DATABASES',
      @FragmentationLow = NULL,
      @FragmentationMedium = NULL,
      @FragmentationHigh = NULL,
      @UpdateStatistics = 'ALL',
      @OnlyModifiedStatistics = 'Y';
    --Prashanth
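    If Ola's scripts are not an option, SQL Server's built-in tools can do a stats-only pass. A minimal sketch (dbo.MyTable is a placeholder name, not from the thread):
    -- Update statistics in the current database; only statistics with
    -- row modifications are touched, roughly matching
    -- @OnlyModifiedStatistics = 'Y' above.
    EXEC sys.sp_updatestats;
    -- Or target a single table with a full scan:
    UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;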

  • Pending State... for a day

    I have an H.264 Blu-ray file I exported from Premiere that is 22 GB. When I try to build the disc I keep getting an error that it is in a "pending state". I happened to have something else to do and just left the program running, thinking Encore needed to do its thing, but it's been a day now and I still get the same message. Any ideas?

    If it was exported as H.264 Blu-ray, you should use settings that allow it to show as "Do Not Transcode." The file may be a bit too big, depending on what else you have in the project.
    Separately, it helps troubleshooting to use the "Transcode Now" option to separate problems with transcoding the asset from other issues in the project.

  • How to find list of pending statements ????

    How do I find a list of pending statements? I want to show a report using the SALV_WD_TABLE component to provide basic ALV features like sorting, filtering, and exporting to an Excel sheet.

    Aisurya Kumar Puhan wrote:
    Hi Kris,
    > No, not any standard component is used for this. I only need to know in which tables I can find these.
    >
    > Thanks,
    > Aisurya.
    You need to elaborate on your requirement; this is not leading anywhere.
    Thanks,
    Sarbjeet Singh

  • Use of virtual cube 0FIGL_V10 for reporting on Financial Statements

    Dear all,
    I am new to G/L reporting, and I have a question about using the virtual cube for reporting on financial statements.
    On the content site I see that you can report your balance sheet based on the InfoCube 0FIGL_C10, where you can use the InfoObject 0GLACCOUNT to display the balances.
    I also installed the virtual cube 0FIGL_V10, which has the InfoObject 0GLACCEXT to show the financial statement item hierarchy.
    My questions are:
    - To create a balance sheet report, is it advisable to use the content query 0FIGL_C10_Q001? If I run that query now, it shows the data on separate G/L accounts, where I would like to see a hierarchy display. For reporting on capital expenditure, however, the requirement is that some G/L accounts must not be taken into account. How can I exclude those from the hierarchy for that report?
    - To create the P&L, the advice on the content site is to use the virtual cube, because the financial statement item hierarchy is available there and you can report on specific financial statement items.
    Can anyone tell me the added value of using the virtual cube? Or can I skip it, just add the financial statement item hierarchy to the 0FIGL_C10 cube, and create a report where I select only the hierarchy nodes applicable for the P&L? Will that corrupt the figures or have any other effect?
    Thanks in advance for the help!

    Hi Sundar,
    As Ravi said, the VC can be used when we have a smaller number of users; basically, in the VC only the structure is available. Moreover, when you generate a report based on a MultiProvider that is built on the VC, it picks up the data directly from the source it is connected to.
    Hope it helps.
    ***Assign Points***
    Thanks,
    Gattu

  • Child CPs are always in PENDING STATE.

    -- Parent concurrent program, as posted. Package-level declarations such as
    -- req_data, vRequestId, vChildNo, vChildMdlRange, vrequeststatus, phase,
    -- status, dev_phase, dev_status and message are assumed to exist elsewhere.
    PROCEDURE parent_cp (errbuf OUT NOCOPY VARCHAR2, retcode OUT NOCOPY VARCHAR2) IS
      ret NUMBER;
      i   NUMBER;
    BEGIN
      fnd_msg_pub.initialize;
      BEGIN  -- Block A: submit the child requests
        req_data := fnd_conc_global.request_data;
        IF req_data IS NOT NULL THEN
          i := TO_NUMBER(req_data);
          IF i < 5 THEN
            errbuf  := 'Done!';
            retcode := 0;
            RETURN;
          END IF;
        ELSE
          i := 1;
        END IF;
        FOR j IN 1 .. 4 LOOP
          vRequestId(j) := fnd_request.submit_request(
                             'CZ', 'Child',
                             'Delete Localized Text - Child Number : ' || TO_CHAR(vChildNo),
                             NULL, TRUE, vChildMdlRange);
          fnd_conc_global.set_req_globals(conc_status  => 'PAUSED',
                                          request_data => TO_CHAR(vChildNo));
          IF vRequestId(j) = 0 THEN
            errbuf  := fnd_message.get;
            retcode := 2;
          ELSE
            errbuf  := 'Sub-request submitted!';
            retcode := 0;
          END IF;
        END LOOP;
      END;
      BEGIN  -- Block B: wait for the children to complete
        FOR j IN vRequestId.FIRST .. vRequestId.LAST LOOP
          fnd_file.put_line(fnd_file.log, ' request ' || vRequestId(j));
          vrequeststatus := fnd_concurrent.get_request_status(
                              vRequestId(j), NULL, NULL,
                              phase, status, dev_phase, dev_status, message);
          WHILE dev_phase != 'COMPLETE' LOOP
            fnd_file.put_line(fnd_file.log, ' while loop ' || vRequestId(j));
            vrequeststatus := fnd_concurrent.wait_for_request(
                                vRequestId(j), 60, 10,
                                phase, status, dev_phase, dev_status, message);
          END LOOP;
        END LOOP;
        dbms_output.put_line(' Block after submitting child CPs ');
      END;
    END parent_cp;
    The procedure above is the parent CP. The problem is in fnd_request.submit_request('CZ', 'Child', 'Delete Localized Text - Child Number : ' || TO_CHAR(vChildNo), NULL, TRUE, vChildMdlRange): I set sub_request to TRUE and used fnd_conc_global.set_req_globals(conc_status => 'PAUSED', request_data => to_char(vChildNo)) to make the parent CP pause itself.
    It submits 4 child CPs as expected, but with phase INACTIVE and status NO MANAGER, and the parent CP is always in the Running state.
    If I set the sub_request parameter of fnd_request.submit_request to FALSE, it submits 4 child CPs as expected with phase PENDING and status NORMAL, and the parent CP is always in the Running state, but the child CPs never change phase to RUNNING; they stay in PENDING state forever.
    Please suggest how to use fnd_conc_global.set_req_globals and fnd_concurrent.wait_for_request together.

    Please do not post duplicates - see "Parent Concurrent Program executes rest of the logic before PAUSED STATE".
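    For reference, a minimal sketch of the usual sub-request pattern (illustrative only, not the poster's code; the child program short name 'CHILD' is a placeholder): the parent submits the children with sub_request => TRUE, sets itself to PAUSED and returns immediately, and the concurrent manager restarts it with request_data populated once all children complete, so fnd_concurrent.wait_for_request is not needed at all.
    PROCEDURE parent_cp (errbuf OUT NOCOPY VARCHAR2, retcode OUT NOCOPY VARCHAR2) IS
      v_req_data VARCHAR2(240) := fnd_conc_global.request_data;
      v_req_id   NUMBER;
    BEGIN
      IF v_req_data IS NOT NULL THEN
        -- Second invocation: all child sub-requests have completed.
        errbuf  := 'Done!';
        retcode := '0';
        RETURN;
      END IF;
      FOR j IN 1 .. 4 LOOP
        v_req_id := fnd_request.submit_request(
                      application => 'CZ',
                      program     => 'CHILD',  -- placeholder short name
                      description => 'Child ' || j,
                      sub_request => TRUE);    -- must be TRUE for this pattern
        IF v_req_id = 0 THEN
          errbuf  := fnd_message.get;
          retcode := '2';
          RETURN;
        END IF;
      END LOOP;
      -- Pause the parent and return; the manager restarts it when the children finish.
      fnd_conc_global.set_req_globals(conc_status  => 'PAUSED',
                                      request_data => 'DONE');
    END parent_cp;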

  • Reporting Service Integration mode and Key not valid for use in specified state

    I had to uninstall SharePoint Foundation 2013 and reinstall it.
    What I have done:
    I uninstalled SharePoint and after that I removed all databases.
    I reinstalled SharePoint Foundation 2013 and it completed without errors.
    Everything else is working except SSRS in SharePoint Integrated Mode.
    If I check Central Administration - Manage services on server, I can see that the SQL Server Reporting Services service is "Started". But if I try to create a new service application (SQL Server Reporting Services Service Application), it gives me the error message:
    "Key not valid for use in specified state. (Exception from HRESULT: 0x8009000B)"
    I uninstalled SSRS in SharePoint Integrated Mode and reinstalled it, but the problem persists.
    It seems that the encryption keys cause this problem:
    https://support.microsoft.com/kb/955757
    https://msdn.microsoft.com/en-us/library/ms156010%28v=sql.110%29.aspx
    I tried to re-create the encryption keys and to delete the keys, but that isn't working either. It only gives me the error message:
    "Unable to locate the Reporting Server Windows service for instance MSSQLSERVER."
    I don't have any report server databases, because I removed all databases when I uninstalled SharePoint. I also don't have backups of the encryption keys.
    I have found several similar questions but no answers, for example here:
    https://social.technet.microsoft.com/Forums/ie/en-US/df02dc05-5ce8-499d-9ba3-ab392a5fc3af/sharepoint-2013-ssrs-application-error-key-not-valid-for-use-in-specified-state-exception?forum=sharepointdevelopment
    Any ideas how I can fix this problem?

    Hi,
    From the Reporting Services Configuration Manager you have to restore the encryption key.
    Please try to delete the databases (reportserver & reportservertempdb) and delete the reportserver and reports sites in IIS. Then you have to start the configuration over again.
    If the issue still exists, please check the ULS log to see if anything unexpected occurred.
    For SharePoint 2013, by default, the ULS log is located in
    "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\LOGS"
    More information about SQL Reporting Services installation with SharePoint 2013 for your reference:
    http://expertsharepoint.blogspot.de/2014/03/sql-reporting-services-installation.html
    Best Regards,
    Dennis Guo
    TechNet Community Support
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact [email protected]
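    As a side note, if you want to double-check whether any report server databases survived the uninstall, a minimal T-SQL sketch (assuming the default ReportServer naming) is:
    USE master;
    SELECT name, state_desc
    FROM sys.databases
    WHERE name LIKE N'ReportServer%';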

  • Scheduled jobs are in released state only

    Dear Experts,
    we schedule jobs to run in the background periodically, but since a particular time, all scheduled jobs have remained in Released state only. Please suggest what the issue might be. Even some SAP standard jobs have not started.

    Hi Raghavendra,
    Have you checked whether enough BTC work processes are available in your SAP system?
    Use transaction SM65, go to Additional Tests, check "Determine work process status" and "Determine no. of jobs in queue",
    and review the test results for any problems.
    Best Regards,
    Shyam Dontamsetty

  • How do I place an image and insert a different photo for the "mouse down" state?

    I have placed an image within an accordion widget. When I select the "mouse down" area in the States dialog box, I click "fill" in the toolbar and insert the photo I want to display when the image is clicked, but this image is covered up by the originally placed image and is not visible in my States dialog box.
    I have done this before by making rectangles and placing images in the rectangles, but I soon realized that, for some frustrating reason, you cannot add alternative text to images placed in rectangles.
    How do I place an image and have a different image for the mouse down state?

    Hello,
    This effect can only be achieved when you use the images as a rectangle fill. And as you mentioned above, you cannot add alternative text in that case, because the image is added as a fill and not as an image.
    I would suggest you post this as a feature request in the "Ideas for features in Adobe Muse" section of the forums: http://forums.adobe.com/community/muse/ideas
    Regards,
    Sachin

  • Testing Process for Gathering Single Object stats.

    Hello Oracle Experts,
    I work on a critical system, and due to the high stakes each and every change is very heavily scrutinized here, whatever the level. One such change currently under scrutiny is gathering object stats for single objects. For background, it is an Oracle eBusiness site, so FND_STATS is used instead of the usual DBMS_STATS, and we have an in-house job that, depending on the staleness of the objects, gathers stats on them using FND_STATS (RDBMS 10.2.0.4, Apps Release 12i).
    Now, we have seen that the job occasionally misses some objects that should ideally be gathered, so they need to be gathered individually, and our senior technical management wants a process around this single-object stats gathering (I know!). I should mention explicitly that the need to gather stale object stats arose because one plan went pretty bad (from 2 ms to 90 minutes), the SQL tuning task states that the stats are stale, and in our copy of the PROD environment (where the issue exists) gathering stats reverts to the original good plan. So we are not gathering just because the stats are stale, but because that staleness is causing a real-time problem.
    Anyway, my point is that stats have been gathered multiple times in the past on that object, and they might be gathered at any time by the automatic (nightly) job. Their arguments are:
    i. There may be several hundred SQL plans depending on that object; we never know how many, nor to what those plans will change, and they could change for the worse, causing unexpected issues in the service.
    ii. There may be related objects whose stats have gone stale as well (for example, sales and inventory tables that both see a related amount of change on a stock_level column), and if we gather stats on only one of them, since the two could be highly correlated (in queries etc.), that may mess up the join cardinalities and hence the plans.
    You see, they know Oracle as well!
    My Oracle (and optimizer) knowledge clearly suggests these arguments are baseless, BUT I want to keep an open mind. So my questions are:
    i. Do the risks highlighted above stand any ground, and how probable do you think any of them is?
    ii. Are there any other points I can make to convince the management?
    iii. Or, if those guys are right, do you use or recommend any testing strategy/process that you can suggest to us?
    Another interesting point is that they are not even very clear at this stage how they are going to 'test' this whole thing, as a paid option like RAT (Real Application Testing) is out of the question, and developing an in-house testing tool still needs analysis in terms of effort, worth, and reliability.
    Finally, can I ask top experts from the 'Oak Table' network to comment so that I can cite their backing? I am hoping they will back me up, but that may not necessarily be the case; I obviously want an honest expert assessment of the situation, not merely my own position endorsed.
    Thanks so much in advance!

    > I work on a critical system, and due to the high stakes each and every change is very heavily scrutinized here, whatever the level.
    > Another interesting point is that they are not even very clear at this stage how they are going to 'test' this whole thing, as a paid option like RAT (Real Application Testing) is out of the question, and developing an in-house testing tool still needs analysis in terms of effort, worth, and reliability.
    Unfortunately, your management's opinion of their system as expressed in the first paragraph is not consistent with the opinion expressed in the second paragraph.
    Getting a stable strategy for statistics is not easy, requires careful analysis, and takes a lot of effort for complex systems.
    > In the end, can I request top experts from the 'Oak Table' network to make a comment so that I can take their backing? Well, I am hoping they'll back me up, but that may not necessarily be the case, and I obviously want an honest expert assessment of the situation and not merely my backing.
    The ideal with stats collection is to start with something simple and then build on the complex bits that are needed; something along the lines suggested by Dan Morgan works: a table-driven approach to deal with the special cases, which are usually the extreme indexes, the flag columns, the time-based/sequential columns, the occasional histogram, and new partitions. Unfortunately you can't get from where you are to where you need to be without some risk (after all, you don't know which bits of your current strategy are causing problems).
    You may have to progress by letting mistakes happen; in other words, when some very bad plans show up, work out WHY they were bad (missing histogram, excess histogram, out-of-date high values) so as to work out the minimum necessary fix. Put a defensive measure in place (add it to the table of special cases) and run with it.
    As a direction to aim at: I avoid histograms unless really necessary, I like introducing function-based indexes where possible, and I'm perfectly happy to write small programs to fix column stats (low/high/distinct) or index stats (clustering_factor/blevel/distinct_keys) and to create static histograms.
    Remember that Oracle saves old statistics when you create new ones, so any new stats that cause problems can be reversed out very promptly.
    Regards
    Jonathan Lewis
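    To illustrate the last point about reverting statistics, a minimal sketch (schema APP and table SALES are placeholders; assumes 10g or later, where the dictionary retains statistics history):
    -- How far back is statistics history retained?
    SELECT DBMS_STATS.GET_STATS_HISTORY_RETENTION FROM dual;
    -- When were the table's stats last changed?
    SELECT table_name, stats_update_time
    FROM   dba_tab_stats_history
    WHERE  owner = 'APP' AND table_name = 'SALES';
    -- Revert the table's stats to what they were before the problem gather.
    BEGIN
      DBMS_STATS.RESTORE_TABLE_STATS(
        ownname         => 'APP',
        tabname         => 'SALES',
        as_of_timestamp => SYSTIMESTAMP - INTERVAL '1' DAY);
    END;
    /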

  • "Key not valid for use in specified state" after IIS Reset?

    I have had a ton of issues with the "System.Security.Cryptography.CryptographicException: Key not valid for use in specified state." error. It now seems to occur only when IIS is reset and I try to resume my browsing session: I am logged into the application, I reset IIS on the server, refresh the page, and see the error.
    I am building an application in .NET 4.0 MVC with a Secure Token Service that is using WIF 4.0. Everything works as expected except in this case. I even tried to use a custom error page, but the error happens there as well, so I can't get the custom page to show either. One thing I noticed is that if I switch my IIS app pool user back to the Network Service account, it doesn't throw the error any more. We have some restrictions (mostly network related) in the application that require us to use an account from our AD for the app pool.
    Anybody have any experience with this issue?

    Hi Shennessy,
    In my opinion, this thread is more related to the ASP.NET forums, so please post it there for a more effective response. Thank you for understanding. Please refer to the following link:
    http://forums.asp.net/
    Regards,

  • Cannot export private key: "key not valid for use in specified state"

    Hi,
    This is a bit of a long story, but I hope someone can give us some guidance.
    We use authentication certificates issued from our own Enterprise CA to control user and machine authentication via RADIUS/NPS for our wireless network. Certificates are deployed via group policy/autoenrollment. In general this works well, but we have an intermittent problem where user authentication stops working for a user who was fine before. The user certificate looks OK in Certmgr (it shows as valid and shows that there is a private key associated with the certificate). The NPS server logs show that the machine has been authenticated and granted access, but the user in this situation doesn't show up in the server logs at all.
    The only solution in this case is to connect to the wired network and request a new certificate for the user (either via Certmgr or just by deleting the duff cert and logging off/on again to get a cert via autoenrollment).
    The interesting thing is that while a "working" certificate can be exported with no problem, a duff certificate cannot be exported with its private key, giving the error "key not valid for use in specified state". (Obviously the certificates come from the same template, and the key is not marked unexportable.) The key files are present in %userprofile%\Appdata\Roaming\Microsoft\Crypto\RSA, and the user permissions on these files look correct.
    After much searching of the forums, I tried running certutil -repairstore on the duff certificate, and that returned the same error. I also tried an undocumented switch, certutil -user -key -v, and again got a very similar error: "Loadkeys returned key not valid for use in specified state. 0x8009000b (-2146893813)".
    I'm assuming that the fact that the key is unexportable/corrupt is also the reason why the certificate can no longer be used for authentication.
    Does anyone have any clues as to what might be causing this, and/or whether a certificate with a key in this state can be repaired?
    Thanks!

    I can just share an experience I once had that was somewhat similar:
    In that case, certificates could sometimes not be enrolled, and the CSP came up with a related error message.
    The root cause was the software/driver (?) for a hardware dongle required to run some software. This "driver" added a registry key to the list of CSPs (under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\Defaults\Provider; I saw this on XP, so the exact location might be different now).
    This fake CSP entry, which had quite a weird name, effectively broke the other CSPs. After removing it, access to and generation of keys worked fine.
    So it would be interesting to know if you run some software that is "close to CSPs or cryptography".
    Elke

  • Possible to make an item editable for certain users and read only for others

    Is it possible to make an item editable for certain users and read only for others?
    I've been able to accomplish this by taking the select statement that I used to define an authorization scheme and placing it in the Read Only condition of the item. However, I would like to simply reference the authorization scheme, both to utilize caching and to help keep things cleaner for future maintenance.
    Is it possible to reference an authorization scheme in an item condition similar to the way another item can be referenced by preceding it with a colon (i.e. :P1_First_Name)?

    Thank you, your suggestion worked.
    It would be nice in a future release of APEX if a drop-down box existed under the Read Only section that allowed an existing authorization scheme to be selected, or negated, when applying the Read Only attribute to a form item.
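    For later readers: one way to reference an authorization scheme from an item's Read Only condition is a condition of type "PL/SQL Expression". A minimal sketch (assumes APEX 4.2 or later, where apex_authorization is available; MY_AUTH_SCHEME is a placeholder):
    -- Item becomes read-only when the user FAILS the authorization scheme.
    NOT apex_authorization.is_authorized(p_authorization_name => 'MY_AUTH_SCHEME')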

  • Databases in Recovery Pending state

    Dear all,
    I need your expertise to understand this a bit more.
    Yesterday I came across the problem of databases in the "Recovery Pending" state.
    I followed many recommendations from Google, but to no avail, e.g.:
    http://dbamohsin.wordpress.com/2012/01/23/cannot-detach-a-suspect-or-recovery-pending-database/
    I also plucked some leaves from Paul Randal's articles: http://www.sqlskills.com/blogs/paul/post/Search-Engine-QA-4-Using-EMERGENCY-mode-to-access-a-RECOVERY-PENDING-or-SUSPECT-database.aspx
    But none of them fixed my problem.
    What I did:
    I installed 2008 R2 on a machine (DEV) which already had 2008; yes, the instance name was different.
    I had to restart the server, so I did (I gave all developers enough notice, but for some reason the databases ended up in Recovery Pending mode).
    After the restart, some databases (not all) were in the Recovery Pending state. The links above provided enough material to recover them, but it didn't work. For instance, after setting a database to Emergency mode, it stayed there for ages, reporting something like "Database is being recovered, wait until recovery is finished".
    How I fixed it:
    I attached the databases to the newly created 2008 R2 instance, taking them offline on the old instance first in case of any access conflict. I didn't get any error while attaching them, like files being in use or similar (which raised an eyebrow).
    A couple of things I noticed after the new instance's installation:
    While trying to reattach a database on the same old instance, I wasn't able to see the .mdf and .ldf files on their respective drives, but I was able to see them from the new instance. WHY?
    Both SQL Server services (old and new) were using the same service account. When I gave the service account enough NTFS permissions on the drive, I was able to see all the .mdf and .ldf files.
    My questions are:
    1) Why wasn't that the case before? Important to note: the service accounts were local admins on the server. (I think I am answering myself here; I would love more insight on it.)
    2) Is it normal that I only have the 2008 R2 tools under Start >> Programs, e.g. only the 2008 R2 Management Studio and not the 2008 one?
    I don't want to let this happen in a production environment (I will have backups there, though), so is there any approach you use to avoid it?
    I hope I have made myself clear; if not, let me know.
    Thanks in advance for any help.
    Dinkar Chalotra

    If you don't need your log file, you can:
    Detach your database.
    Move the log file somewhere else.
    Attach the database back and, from the file list, remove the .ldf.
    SQL Server will create a new log file and the database will be online again.
    Escarcha
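    In T-SQL, the steps above look roughly like this (a sketch; the database name and file path are placeholders):
    USE master;
    -- Detach the database (take it offline first if it will not shut down cleanly).
    EXEC sys.sp_detach_db @dbname = N'MyDb';
    -- After moving the old .ldf away, attach the data file alone;
    -- SQL Server rebuilds a brand-new log file.
    CREATE DATABASE MyDb
    ON (FILENAME = N'D:\Data\MyDb.mdf')
    FOR ATTACH_REBUILD_LOG;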
