PCAP configuration for Zynq SEM IP core

Hi,
I am trying to get the SEM IP to work on Zynq.
I am following the instructions here: http://forums.xilinx.com/t5/Zynq-All-Programmable-SoC/SEM-IP-Zynq-Devices/td-p/590798
Copying them here:
1) Read the SEM IP manual carefully (pages 53-55)
2) Create a block diagram in Vivado (Zynq + AXI GPIO)
3) Create a wrapper and instantiate the SEM IP
4) Export the design to SDK
5) Write code for the processor that clears PCAP_PR (bit 27) and then enables the GPIO connected to the icap_grant signal
6) Program the FPGA, open PuTTY and run the code
I have a couple of questions.
1. Once I create the wrapper, do I need to instantiate the SEM IP in wrapper.v? (Will the wrapper be the top module?)
2. How can I clear the PCAP bit? I have found that PCAP_PR (bit 27) is in the PS device configuration control register (DEVCFG CTRL, address 0xF8007000). Do I need to write C code to clear this bit? Also, where should I connect the GPIO to icap_grant so that I can drive it to '1' when required?
Thanks.
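For reference, here is a minimal sketch of what that PS-side code could look like, assuming the DEVCFG CTRL address and PCAP_PR bit quoted above, and assuming an AXI GPIO output in the block design drives the SEM core's icap_grant pin; the XPAR_AXI_GPIO_0_DEVICE_ID name is illustrative and depends on your own design:

#include "xparameters.h"
#include "xgpio.h"
#include "xil_io.h"

#define DEVCFG_CTRL   0xF8007000U   /* PS devcfg control register */
#define PCAP_PR_MASK  0x08000000U   /* bit 27: PCAP_PR */

int grant_icap(void)
{
    XGpio gpio;
    u32 ctrl;

    /* Hand the configuration interface from PCAP to ICAP by clearing PCAP_PR */
    ctrl = Xil_In32(DEVCFG_CTRL);
    Xil_Out32(DEVCFG_CTRL, ctrl & ~PCAP_PR_MASK);

    /* Drive the AXI GPIO output that is wired to icap_grant in the fabric */
    if (XGpio_Initialize(&gpio, XPAR_AXI_GPIO_0_DEVICE_ID) != XST_SUCCESS)
        return -1;
    XGpio_SetDataDirection(&gpio, 1, 0x0);   /* channel 1: all bits as outputs */
    XGpio_DiscreteWrite(&gpio, 1, 0x1);      /* set icap_grant = 1 */

    return 0;
}

The GPIO output itself would then be connected to the SEM controller's icap_grant input in the wrapper or block design.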

Thank you for the reply.
I instantiated the SEM core correctly.
Now, when I try to run the code, I get the following output on the serial port (configured for 115200 baud):
O> 02OK
I am running the following code:
#include <stdio.h>
#include <xil_printf.h>
#include <xil_types.h>
#include "platform.h"
#include "xil_io.h"

int main()
{
    volatile unsigned int ctrl;
    volatile unsigned int reset;

    init_platform();
    xil_printf("Hello World\n\r");

    reset = Xil_In32(0xF8000240);                 /* SLCR FPGA_RST_CTRL */
    xil_printf("RESETS: %08x\n\r", reset);

    Xil_Out32(0xF8000008, 0x0000DF0DU);           /* SLCR unlock key */
    Xil_Out32(0xF8000240, 0x0000000EU);           /* to ensure icap_grant is 0 */

    ctrl = Xil_In32(0xF8007000);                  /* devcfg CTRL register */
    xil_printf("PCAP DEVCFG CTRL: %08x\n\r", ctrl);
    Xil_Out32(0xF8007000, ctrl & ~0x08000000U);   /* clear PCAP_PR (bit 27) */
    ctrl = Xil_In32(0xF8007000);
    xil_printf("ICAP DEVCFG CTRL: %08x\n\r", ctrl);
    xil_printf("ICAP granted\n\r");

    Xil_Out32(0xF8000240, 0x0000000CU);           /* icap_grant to 1 */
    reset = Xil_In32(0xF8000240);
    xil_printf("RESETS: %08x\n\r", reset);

    cleanup_platform();
    return 0;
}
(My SEM IP is configured for 100 MHz.) I believe my UART (via a Pmod USB UART) is configured properly, since I get
ICAP M_V4_1
when I program the FPGA without running the SEM code.
How can I solve this error?
 

Similar Messages

  • Stock RAM Configuration for 15" MacBook Pro 2.4GHz Intel Core 2 Duo

    Hello,
I have a new 15" MacBook Pro 2.4GHz Intel Core 2 Duo on its way to me and I need to order a RAM upgrade. I know the specifications of what I need, but am getting conflicting information from the retailer on how the stock models are configured for RAM. The total amount is 2GB. Does anyone know if this amount is provided by one or two RAM modules?
    "Technological change is like an axe in the hands of a pathological criminal.” (Albert Einstein, 1941),
    Dr. Z.
Message was edited by: Dr. Z.

You'll find the RAM is in a 2 x 1GB configuration. Unfortunately, both modules need to come out for you to upgrade to 4GB RAM in a 2 x 2GB configuration. It's still cheaper than going through Apple, which softens the blow a little.

  • No JobServers are configured for ISJobServerGroup. (COR-10715)

    hello,
I have a problem when doing data profiling in Information Steward:
    No JobServers are configured for ISJobServerGroup. (COR-10715)
    com.bobj.mm.sdk.SDKException: No JobServers are configured for ISJobServerGroup. (COR-10715)
I searched the forums and marketplace for similar errors and tried all the suggestions, such as
creating a Job Server using the Data Services Server Manager, but it still didn't work.
For information, I installed BO Platform 4.0, Data Services 4.0 SP1 and then Information Steward 4.0, with a SQL Server 2008 database.
Any help will be appreciated...
Thank you.

Hi laksh, thank you for your reply.
    laksh89 wrote:
    hi,
you have to configure your server and then associate it with the IS repository.
What I've done so far:
Created the Central, Profiler and Local repositories using the Data Services Repository Manager
Configured the repositories in the BO Central Management Console (Central, Profiler and Local)
Created a group for the Central repository and assigned a user in the Data Services Management Console
With the Data Services Server Manager I created a local Job Server and a Profiler Job Server
Using the Data Services Designer I added the Central repository and activated it.
    laksh89 wrote:
    > set the environment variable first in command line and  in service manager run
    > $ cd $LINK_DIR/bin/
Regarding this, I don't know how to run the server manager. When I go to $LINK_DIR/bin/ from the DOS command prompt, I can't find a server manager application; I only find al_jobserver, al_jobservice, etc.
    laksh89 wrote:
    > $ . ./al_env.sh
    > $ ./svrcfg
I couldn't find the above objects either.
    laksh89 wrote:
    > enter 3 : configure  server
    > enter c: add server , then give name and specify the port of the server
    > enter a: add to the repository connection, here you have to specify the connection info
    >
> once everything is done, press q and then x to exit the service manager.
>
> Also, when you have installed Data Services, please make sure you selected Job Server under server components in the "Select Features" option, and that MDS and VDS are chosen during the DS installation, as they are unchecked by default
When installing Data Services I did select the Job Server feature, but I didn't see MDS and VDS during installation.
I'm sorry, this is all quite new to me...
    Edited by: Martinus Hendriyanto on Dec 4, 2011 2:16 PM

  • Dynamic configuration for the application

I'm a newbie trying to evaluate JSF. I took part in a project using the Struts framework, and I can say I don't like Struts because of some of its limitations, and I suspect that JSF has the same limitations. First of all, there are the JSP pages. I can't use JSP pages/templates stored anywhere other than the web application. I'd like to store my templates in the database; this would give me the ability to add/change/remove a JSP page without redeploying the war archive. Yes, I could use an exploded war archive and add JSP pages straight into the file system, but I don't like that approach. The next problem is the configuration file, like struts-config (faces-config, whatever). I have to write down all my navigation logic and bean mappings in this file, and it's really annoying me. This kind of project requires dynamic configuration for the whole web application, and I'd like to add/change/remove pages, beans and actions without restarting/reconfiguring the webapp. Today I looked through the documentation and source code and found that the FactoryFinder class could use my own classes, but I'm not sure that will be enough for dynamic configuration of the beans. In any case, the templates are a huge problem.
Right now I'm thinking about one JSP with XML/XSLT-rendered HTML content and JSF for event/action handling. I mean, I'll use XSLT to render the dynamic content and JSF for dispatching events. In that case I have to render the form tag names in the HTML the way the JSF engine expects (I'm not sure if that is possible). If it takes too long to do this, I'll have to switch to a home-grown framework.
So, I really need advice on how to implement this sort of dynamic behavior in a web application.
Regards,
anton

    A lot of what you're seeing are genuine limitations, but happily, JSF is so pluggable that you can overcome them.
    JSPs are problematic in exactly the way you describe - unless your app server has support for pulling JSPs out of a database, etc., you're SOL. JSF, however, lets you use something other than JSPs by replacing the ViewHandler. That's a fair bit of work, but it is doable.
    For navigation logic, you'd replace the NavigationHandler. As long as you're willing to write the code that can pull navigation rules from an external source, life is good.
    For managed beans, all you need to replace is the VariableResolver - again, if you can pull the rules from an external source, you have full control.
    This is all far from easy - and it's a huge amount of work for one person - but I'd imagine these pieces will become available from various sources. A core goal of JSF 1.0 was making the framework as a whole pluggable so that others can innovate on top of the framework.
    -- Adam Winer (EG member)

  • Switch configuration urgent help (edge and core)

    hi
I have a new project with the products below:
    20 X WS-C2960-24TC-S
    2 X WS-C3750X-48T-S
    2 X WS-C2960S-24TS-S
I need to configure these switches to work without VLANs: first the two core switches for redundancy, then each Catalyst 2960 (edge switch) connected to the two cores with two uplinks, each uplink connected to a single core switch (I have two core switches and I want to configure them in stacked redundancy mode).
I need help configuring these switches to work well with each other in the best redundancy mode; any configuration for these switches would be very helpful to me.
    thank you

    Hey,
This is a very open question, but I believe the document below is a good place to start:
    http://www.cisco.com/c/dam/en/us/td/docs/solutions/Enterprise/Small_Enterprise_Design_Profile/chap2sba.pdf
    HTH.
    Regards,
    RS.

  • Best RAM configuration for Macbook (mid 2007)

    Hi, I have a Macbook (mid 2007) with 1 GB ram and I would like to upgrade. Currently I have 2x512MB, so both of my memory slots are taken. Is it a good idea to have one 512MB card and a 2GB card in the other slot? Otherwise, what is the best RAM configuration for a mid 2007 macbook?
I am also looking into purchasing this memory module for my MacBook; is it a good choice?
http://www.amazon.com/Crucial-2x1GB-200-Pin-667Mhz-SODIMM/dp/B000FQ2JLW/ref=sr_1_3?ie=UTF8&qid=1307335802&sr=8-3
    Any advice would be very much appreciated!

    Kappy wrote:
    In  these early models, Ronda, the benefit is very marginal.
    Yes, I read the figure 128 in regards to the benefit. Something about interleaving, maybe?
    In doing a Google search for that article I saw, I found this:
    http://www.mac-forums.com/forums/apple-notebooks/99215-4gb-ram-my-macbook.html
    the maximum amount of RAM your laptop can take is 3.3GB. So while you can physically insert 4GB of RAM, OS X will only be able to see 3.3GB of it.
    But I don't know how knowledgeable that poster is who says it can utilize 3.3 GB. I do know that the full 4 GB shows up in the System Profiler on my MacBook Core 2 Duo 2.16 GHz.

  • 12c agent install fails with error - The plug-in configuration for the oracle.sysman.oh monitoring plug-in may have failed

    Hi,
I am trying to install the 12c agent on a Windows 7 64-bit server by following m.note.
It is failing with "Agent Configuration Failed"; please see the error message below.
    INFO: length of temp is2
    INFO: Return value:C:\/Oracle/12.1.0.3.0_AgentCore_233/core
    INFO: ** Agent Port Check completed successfully.**
    INFO: ERROR: The Management Agent configuration failed. The plug-in configuration for the oracle.sysman.oh monitoring plug-in may have failed, or this plug-in may not be present in the Management Agent software. Ensure that the Management Agent software has the oracle.sysman.oh monitoring plug-in, if not then retry the operation. If the agent software has the oracle.sysman.oh monitoring plug-in, view the plug-in deployment log C:\Oracle\12.1.0.3.0_AgentCore_233\core\install\logs to check if the plug-in configuration for the oracle.sysman.oh monitoring plug-in failed.
    INFO:
    INFO: perform - mode finished for action: configure
    INFO:
    INFO: You can see the log file: C:\Oracle\12.1.0.3.0_AgentCore_233\core\12.1.0.3.0\cfgtoollogs\oui\configActions2014-10-03_08-48-15-AM.log
    INFO:
    INFO: C:\Oracle\12.1.0.3.0_AgentCore_233>exit /b 3
    INFO: Plugin homes:
    INFO: Plugin homes:
    INFO: C:\Oracle\12.1.0.3.0_AgentCore_233\core\12.1.0.3.0\oui\bin\runConfig.bat ORACLE_HOME=C:\Oracle\12.1.0.3.0_AgentCore_233\core\12.1.0.3.0 RESPONSE_FILE=C:\Oracle\12.1.0.3.0_AgentCore_233\core\12.1.0.3.0\agent.rsp ACTION=configure MODE=perform COMPONENT_XML={oracle.sysman.top.agent.11_1_0_1_0.xml} RERUN=true completed with status=3
    SEVERE: ERROR: Agent Configuration Failed
    Thanks,

Looks like J2EE is having a problem connecting to the DB. Please check the following:
- Have you configured the loopback adapter (if installing on a local system) and updated the /etc/hosts file with your IP address and host name,
or is the system part of some network group with an IP address assigned?
- Is JDK 1.4 installed on your system?
- Is there enough free disk space available on the system?
- How much RAM does your system have, and which Windows version are you using?
- Is the DB coming up properly? The error is related to this; check the logs.
- Are the ports used by the SAP install/run being used by other services on the system, and does the user have admin privileges on the system?
Please update with your findings on the above for the next step...
Thanks,
    Uppal

  • Best Mac Pro (2013) configuration for photo editing/processing?

    Hi all,
    I couldn't find a reliable answer to this in my searching here or on google, hence I'm posting it here.
I'm going to buy and upgrade to the new Mac Pro when it's announced this month (Dec 2013). My primary use will be photo processing in Photoshop.
Configuring it with 64GB RAM is the no-brainer part, and probably a 512GB or 1TB flash drive too.
The bit I'm unsure about is whether to opt for the 6-core processor option over the quad-core. For photo editing (adding layers, filters, brushing in, multiple files open at times, running batch edits, etc.), does anyone have an opinion on whether the performance increase (if there is in fact any increase for photo work) of the 6-core 3.5GHz would justify paying the extra AU$1300 over the quad-core 3.7GHz option?
And from my earlier research, paying the huge prices for 8 or 12 cores would simply be a waste for photo processing.
    Thanks for the advice...

    Mozzzaaa
I have the exact same requirements; here are my findings, based on some observations from Activity Monitor and research into how the hardware works.
Photoshop does not utilize multiple cores well for many standard editing activities - therefore one core will be busy while the rest remain idle. However, I have noticed over time that upgrades to Photoshop take more advantage of multiple cores as Adobe updates the code. For example, applying filters utilizes all of the cores while the filters are computing changes (Smart Sharpen, for example). Try running CPU monitoring in Activity Monitor (double-click the CPU graph to display all cores).
Lightroom utilizes all of the cores for import, export and other activities that process multiple files. Being more modern code, it better utilizes multiple cores.
Keep in mind that each core handles two code threads, therefore a four-core system is capable of processing 8 "streams" of code, the 6-core can manage 12 threads, etc.
Here is a screen shot of a MacBook Pro running PS CC Smart Sharpen:
All the new Mac Pros run at 3.9GHz Turbo Boost - they are all the same in that respect. This means that when the processors are not hot, at least one core will run at 3.9GHz - therefore on a relatively idle machine (just editing in PS, for example) you would likely be running at 3.9GHz on all the Mac Pro 2013 models.
There are also the GPUs to consider. Apple as usual has not made enough information available to easily determine the cost/benefit of the more powerful GPUs, and I don't know whether PS would utilize the AMD GPUs well now, or perhaps better utilize them in the future. Perhaps someone could comment on that. Here is an interesting article: http://architosh.com/2013/10/the-mac-pro-so-whats-a-d300-d500-and-d700-anyway-we-have-answers/
Clearly the D500 that is standard with the 6-core seems a major bump over the 4-core's D300 (the cost of the 6-core reflects that). I don't know how much the D700 would cost - it would be helpful if this were published so I could consider my order.
There are two GPUs in the new Mac Pros - but the purpose of the second one is not totally clear (thanks again to Apple's communication). It likely will be utilized for all sorts of things that don't really exist yet, and FCP X is scheduled for a new release that better utilizes the GPU for video (as nwaphoto mentioned, video processing will be a major use of this equipment).
I was interested in your comment regarding 64GB of RAM. Yes, that would be a huge boost to PS performance, but would it be better to purchase it from Apple or wait for OWC, who offer RAM at major discounts over Apple? Once again, no info yet that I am aware of.
I believe the flash drive is upgradable, but rumor has it that it uses a proprietary connector. That makes me want to go with the largest size, but once again OWC might be the way to go for an upgrade in a year or two.
In the past, the 6-core 2012 Mac Pros were somewhat of a sweet spot in terms of horsepower vs. cost. I will be considering that in my decision to upgrade. So I am considering a 6-core, and will check out the RAM and flash disk based on price - which is the info I don't have. If you have anything, please post.
    Thanks

  • Creating a production order for a semi finished product

    Hi
I have a multi-level BOM scenario here.
One of the BOM components of the header material is a semi-finished product.
I am using planning strategy 40 for that material.
In standard SAP, when I create a production order for the header material, the requirement gets transferred to all the components. If the components are not available, a requirement is placed; when I run MRP that requirement gets converted into a planned order, which converts into a purchase requisition or a production order.
For my semi-finished product, it is a work-in-process item; there will not be any stock for that item in the storage location at any time.
What I require is this: when I create a production order for my finished product, it should automatically create a production order for the semi-finished product.
Is this possible with some configuration settings?

Dear Deepu,
For your requirement you need to maintain MRP type PD for the FERT material and also for your HALB.
If you run MRP, it will create planned orders for the FERT and for all its dependent requirements (for PD).
If you create an order manually for the FERT material, it will not create an order for the HALB material; you need to create one manually for it as well.
sree

  • OER Atrifacts Store Setup and Configuration for CVS.

    Hello,
    My question is related to proper configuration of a CVS based Artifact Store in Oracle Enterprise Repository.
    I've attempted to configure a CVS Artifact Store from within OER's Asset Editor (as described on page 27 of the OER Configuration Guide & pages 92-93 of the OER Admin Guide.) I have also ensured that this new Artifact Store is selected in the dropdown for the Submission Upload Artifact Store system setting on the OER Admin page. However, my configuration settings for the Store appear to be incorrect and I haven't found a CVS example that has been thorough enough to infer the proper settings.
    So I'm hoping someone can assist me who has been through configuring a CVS Artifact Store for OER. I'll try to provide detailed information below with the hope that it may be of assistance.
    First, analogous CVS settings that are configured for my standard CVS plug-in in Oracle Workshop. These settings are for the pserver protocol, but I think they will provide some value to someone who has experience in configuring a CVS Artifact Store.
    The standard Eclipse CVS plug-in settings for our enterprise repository location:
    Connection Type: pserver
    User: sampleuser
    Password: Pa55wd
    Host: dev003
    Repository Path: /cvs/Integration
    This translates to repository location --> :pserver:sampleuser:Pa55wd@dev003:/cvs/Integration
    (Which is the root of our enterprise CVS repository)
    Now…within this repository location above there is a module (Development/OER-POC) that is located in:
    /cvs/Integration/Development/OER-POC
    …and checked out into a project called "Sandbox" located in the default workspace in Oracle Workshop.
    Additionally, within the organization we also have HTTP access to CVS. This previous example XSD I just mentioned has an HTTP URI of:
    http://dev003:8080/viewcvs/viewcvs.cgi/Development/OER-POC/src/schemas/ExtOfAddrRef/v1/ExtOfAddrRef.xsd?cvsroot=Integration
    Now as I have attempted to properly set up the configuration for the OER Artifact Store I have "translated" the above information into the following entries on the Artifact Store setup screen:
    Name: CVS Enterprise Store
    Type: Raw SCM
    Hostname: dev003
    SCM Location: Integration (??? Not sure if this has been inferred correctly. If not what should be specified here.)
    SCM Type: CVS
    Download Path URI Suffix: cvsroot=Integration (??? Not sure if this correct based in previous information?)
    Download Path URI: (??? Not sure what should be specified here. I have inferred several logical options but they have not worked.)
Finally, when I referenced page 62 of the OER Core Registrar's Guide PDF, the "Additional Development documentation" link (http://devwiki.flashline.com/index.php/B02831) states:
    • "All files from an SCM will be URL addressable. The SCM (or a third party) must provide a way to get a particular file based on a URL. In other words, we are not going to use any client libraries to write code that will retrieve us a file from an SCM. "
    • "Added concept of a 'download path' to an artifact store. For example, consider our development environment. Eclipse will have SCM information (i.e. cvs.flashline.com), eclipse/cvs project information (i.e. projects/framework/modules/com.flashline.geneva.rbac), and file/cvs file information (i.e. /code/com/flashline/geneva/rbac/base/RoleContextPersistBroker.java?rev=1.66). Using this info, a fileinfo's uri can be set. The artifact store will then allow us to specify a download base path such as http://cvs.flashline.com/viewcvs/viewcvs.cgi/."
    To conclude my questions are:
    1) Based on the comments in the Registrar's Guide it seems clear that the intent of an Artifact Store is purely for the support of downloading the physical artifact that corresponds to an OER asset. I would conclude that "Raw SCM" based Artifact Stores do not intend to support direct check-ins for the various SCM systems. (rather assets/artifacts in Eclipse would be manually checked in from within the IDE environment). If someone could confirm whether this is correct that would be much appreciated.
    2) Based on the information I supplied for the example enterprise CVS repository...what would the appropriate settings be for these fields on the Artifact Store setup screen:
    a) SCM Location
    b) Download Path URI Suffix
    c) Download Path URI
    3) Since the "CVS" SCM Type does NOT specify fields for username and password (unlike when you select other potential SCM Types in the Store setup screen); how should one handle credentials in CVS repositories?
    Thanks in advance to any assistance.
    ~Todd

    Hello user642477,
    I'm facing the same problem.
It seems to me that Oracle's guidelines don't give enough information. I'll try to fix it, and if I manage to I'll be in touch...
By the way, how were you able to browse to the link http://devwiki.flashline.com/index.php/B02831? When I try it, a "page cannot be displayed" message is shown.
    Regards
    felipe

  • Client context error message while configuring for social login and personalization

    Hi,
    I am getting the below exception while configuring for social login and personalization.
    27.12.2012 11:21:25.463 *ERROR* [127.0.0.1 [1356587485463] GET /etc/cloudservices/facebookconnect/sample_fb.login.html/callback/connect HTTP/1.1] com.day.cq.wcm.core.impl.designer.DesignerImpl No design at /etc/design/cloudservices. Using default.
    27.12.2012 11:21:46.549 *ERROR* [127.0.0.1 [1356587485463] GET /etc/cloudservices/facebookconnect/sample_fb.login.html/callback/connect HTTP/1.1] com.adobe.granite.auth.oauth.impl.oauth2.Oauth2Helper Problems while creating connection.
    27.12.2012 11:21:46.549 *WARN* [127.0.0.1 [1356587485463] GET /etc/cloudservices/facebookconnect/sample_fb.login.html/callback/connect HTTP/1.1] com.adobe.granite.auth.oauth.impl.oauth2.Oauth2Helper token was null or not in UNAUTHORIZED state:1
    27.12.2012 11:21:46.549 *ERROR* [127.0.0.1 [1356587485463] GET /etc/cloudservices/facebookconnect/sample_fb.login.html/callback/connect HTTP/1.1] com.adobe.granite.auth.oauth.impl.servlet.OAuthProfileImportServlet requestAccessToken: could not retrieve user
    27.12.2012 11:21:46.549 *ERROR* [127.0.0.1 [1356587506549] GET /etc/cloudservices/facebookconnect/sample_fb.login.html HTTP/1.1] com.day.cq.wcm.core.impl.designer.DesignerImpl No design at /etc/design/cloudservices. Using default.
    27.12.2012 11:21:48.455 *ERROR* [127.0.0.1 [1356587508455] GET /etc/clientcontext/default/contextstores/profiledata/loader.json HTTP/1.1] org.apache.sling.engine.impl.SlingRequestProcessorImpl service: Uncaught SlingException org.apache.sling.api.SlingException: An exception occurred processing JSP page /libs/cq/personalization/components/profileloader/command/load/load.json.jsp at line 41
at org.apache.sling.scripting.jsp.jasper.servlet.JspServletWrapper.handleJspExceptionInternal(JspServletWrapper.java:574)
at org.apache.sling.scripting.jsp.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:499)
at org.apache.sling.scripting.jsp.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:451)
at org.apache.sling.scripting.jsp.JspServletWrapperAdapter.service(JspServletWrapperAdapter.java:59)
    Thanks,
    Shankar .A

    Hi Shankar,
    Any luck with this issue. I am also seeing the same issue
    Thanks
    Pushparajan

  • No servers configured for repository?

    Hi, I just recently reinstalled arch, and I don't even have a DE or WM installed yet. I was just trying to install wicd to get my wifi to work, when I got this:
    :: Retrieving packages from core...
warning: failed to retrieve some files from core
error: failed to commit transaction (no servers configured for repository)
Errors occurred, no packages were upgraded.
    I am new to arch, so it is possible I made a mistake when installing. The computer is plugged into Ethernet, and I just recently updated arch as I installed it from an older CD (about 1 year old?)
    I am logged in as root, and I did search the web and these forums for this issue and I didn't find it.
    Thank you for your time.

My mirrors were commented out, but I fixed that. Now when I do pacman -S alsa-utils it goes through all my mirrors and says "no address record", ending with:
    failed to retrieve some files from extra
    error: failed to commit transaction (no address record)
    Errors occurred, no packages were upgraded.
    I must have messed something up big time.

  • AM calls to LDAP No plugins configured for this operation

    Hi All,
I am getting the following error when creating a user using AM SDK calls. Can someone shed some light here?
    We are using SUN JES 2005Q4, AM 7.0 Patch 5.
    Thanks
    Bala
    [#|2007-11-02T11:12:09.615-0500|WARNING|sun-appserver-ee8.1_02|javax.enterprise.system.stream.err|_ThreadID=13;|
    Message:No plugins configured for this operation
    at com.sun.identity.idm.server.IdServicesImpl.create(IdServicesImpl.java:177)
    at com.sun.identity.idm.AMIdentityRepository.createIdentity(AMIdentityRepository.java:246)
    at gov.research.core.eauth.action.SSOUtilities.createUser(SSOUtilities.java:197)
    at gov.research.core.eauth.action.SAMLClientNSFAction.execute(SAMLClientNSFAction.java:99)
    at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484)
    at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
    at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482)
    at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:507)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:747)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:860)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.apache.catalina.security.SecurityUtil$1.run(SecurityUtil.java:249)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAsPrivileged(Subject.java:517)
    at org.apache.catalina.security.SecurityUtil.execute(SecurityUtil.java:282)
    at org.apache.catalina.security.SecurityUtil.doAsPrivilege(SecurityUtil.java:165)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:257)
    at org.apache.catalina.core.ApplicationFilterChain.access$000(ApplicationFilterChain.java:55)
    at org.apache.catalina.core.ApplicationFilterChain$1.run(ApplicationFilterChain.java:161)
    at java.security.AccessController.doPrivileged(Native Method)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:263)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:551)
    at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:225)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:173)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:551)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:161)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:551)
    at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:933)
    at com.sun.enterprise.web.connector.httpservice.HttpServiceProcessor.process(HttpServiceProcessor.java:226)
    at com.sun.enterprise.web.HttpServiceWebContainer.service(HttpServiceWebContainer.java:2071)
    |#]

Yes, I am using JSF 1.2.
I have included the listener tag in web.xml, but I am still getting the same error.
Sorry, I didn't get a couple of the things that you explained about MyFaces. Could you explain in more detail?
JBoss comes with MyFaces as its JSF implementation. If you wish to use JSF 1.2, you need to uninstall MyFaces and install Sun's RI of JSF (or another one if you prefer). It is not hard to do; see the JBoss documentation:
    http://wiki.jboss.org/wiki/Wiki.jsp?page=JBossWithIntegratedMyFaces

  • Missing configuration for an EAR on the server

    Hi all,
I'm new to NetWeaver and am using version 2.0.11 of the Developer Studio to develop web services and clients to access them.
In order to get familiar with the Developer Studio and the deployment procedure, I coded a simple JSP page that does nothing other than print "hello world!" in the browser, and deployed it successfully on the server through an EAR archive.
My problem is that when I call the JSP page I get the following exception in the log viewer:
    An error occured while copying configurations for application sap.com/BPEWS_CLIENT_EAR. Reason: com.sap.engine.frame.core.configuration.NameNotFoundException A configuration with the path "webservices/services/sap.com/BPEWS_CLIENT_EAR" does not exist.
    I used the Visual Administrator on the server to confirm that there is effectively no such configuration under the "Configuration Adapter" tree.
    Surprisingly, the JSP seems to be working anyway, as I see the "hello world!" output in my browser, but I suspect that this error will cause me problems later.
I searched the web and the SAP forums to try to find a solution to this problem, without success. I would very much appreciate any help with it, as I have been scratching my head quite roughly for some hours now. How can I create the required configuration for my client project? Will this error cause me problems in the future, or is it more like an informational message about an optional component (the EAR configuration)?
    Thanks in advance.
    Martin Cloutier
    Trisotech inc.
    Montréal (Québec)

    Definitely option 2. You don't need a new installation...
    Hope this helps
    Rob
    "Steve Schroeder" <> wrote in message news:3b741b95$[email protected]..
What is the recommendation for running multiple instances of WL6 on the same
    server?
    I figure there are several ways to approach this:
    1) Create an entirely new installation with a new BEA_HOME directory
    2) Create an additional domain on the same server that has its own set of
    startup scripts that reference a different port.
    Any thoughts on the best approach?
    Steve

  • Graphics card selection for Premiere Pro  - CUDA core vs Memory bit

    Hi,
    I currently have the following configuration.
    Intel i5 Quad Core.
    8 GB DDR3 RAM
    120 GB SSD + 2 TB HDD
I have plans to upgrade from the quad-core to an i7 six-core processor soon.
I don't have a graphics card now. My work will be editing mostly 720p footage and sometimes 1080p. I may also color grade, but not all the time.
I had selected the GTX 660 Ti at first, but the retailer told me it's outdated. They suggested the GTX 760 and GTX 960 (from Zotac). I liked the GTX 770.
I am confused about CUDA cores versus memory bus width. Which one is more important in determining the performance of the video card for editing? Any help is much appreciated.
I have a tight budget, so please don't suggest costly cards.
    Thanks,
    SB

You want the highest bit width. What I have noticed going from a GTX 650 to a GTX 970 in Premiere CC 2014 is that the GPU only helps with effects and transitions that use GPU assistance, and with timeline playback. You'll see GPU assist with rendering if you have effects and transitions that use the GPU; otherwise it will sit idle. What I have also noticed is that during playback I would have some hiccups with the 650 where the 970 just played right through them, so I think how the program uses the memory is important: with a wider bus more data can get through faster, and the more bandwidth, the faster the card will be, so the 760 and the 770 will be much faster cards. The 960 is more power-efficient, but personally I recommend the 970 over them all; if not, then the higher-end 7xx cards. The 960 will get you by, but the 760 and 770 will get you by faster.
