Unique ID Attribute

I've included a unique ID attribute:
(Name: element_ID, Unique ID, Required, Control flags: Read-only)
and updated an XML instance. How do I get Frame to auto-populate the attribute with a unique ID?

Other than for cross-referencing purposes, Frame doesn't have a mechanism for auto-populating attributes with unique IDs. There are plugins that provide this functionality, but in default FM you have to enter the ID manually.
...scott
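
One workaround, if your workflow round-trips the XML instance anyway, is to post-process the exported file and fill in the missing IDs with a small script rather than typing them in Frame. The sketch below is not a FrameMaker feature, just an illustration: it assumes the attribute is named element_ID (as defined above) and that a generated value of the form id-<UUID> is acceptable.

import java.io.File;
import java.util.UUID;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class FillUniqueIds {
    public static void main(String[] args) throws Exception {
        // args[0] = exported XML instance, args[1] = output file (placeholder paths)
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File(args[0]));
        NodeList all = doc.getElementsByTagName("*");
        for (int i = 0; i < all.getLength(); i++) {
            Element e = (Element) all.item(i);
            // Only fill the attribute where it is missing or empty.
            if (e.getAttribute("element_ID").isEmpty()) {
                e.setAttribute("element_ID", "id-" + UUID.randomUUID());
            }
        }
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(new File(args[1])));
    }
}

Re-importing the processed instance then gives every element a populated ID without manual entry.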

Similar Messages

  • IDispatch error #19876 - LDAP Authentication Source - User Unique Name Attribute

    Hi,
    we are having trouble with the User Unique Name Attribute:
    Because 'cn' and 'dn' may change, we want to use the employee ID ('workforceID') as the unique identifier for our user synchronisation. This attribute exists and is also imported in the profile service. But when we add 'workforceID' to the 'User Unique Name Attribute' in the LDAP Settings of the Remote Authentication Source (LDAP AWS), the job fails and throws the error at the end of this message in the history log.
    When we remove 'workforceID', everything works fine. If we set the User Unique Name Attribute to 'cn' or 'dn', everything works fine, too. If we enter non-existent names, the same error is thrown. It seems as though 'workforceID' cannot be read/found. What are we doing wrong? Thanks in advance.
    1/17/06 12:37:01- (34432) CPTSyncAgent::ProcessUsers: Call to retrieve the users on this auth source failed. Please check that the authentication source server is online.
    *** COM exception was: IDispatch error #19876 (0x80044fa4): [SOAP fault: faultcode='ns1:Server.userException' faultstring='java.rmi.RemoteException: Unknown error occured in internalGetUsers null
    com.plumtree.remote.ServiceException: Unknown error occured in internalGetUsers nullat com.plumtree.ldap.aws.LDAPSyncProvider.internalGetUsers(LDAPSyncProvider.java:671)at com.plumtree.ldap.aws.LDAPSyncProvider.getUsers(LDAPSyncProvider.java:504)at com.plumtree.remote.auth.NativeSyncProvider.GetUsers(Unknown Source)at com.plumtree.remote.auth.xp.XPSyncProvider.GetUsers(Unknown Source)at com.plumtree.remote.auth.soap.SyncProviderSoapBindingImpl.GetUsers(Unknown Source)at com.plumtree.remote.auth.soap.SyncProviderSoapBindingSkeleton.GetUsers(Unknown Source)at sun.reflect.GeneratedMethodAccessor1024.invoke(Unknown Source)at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)at java.lang.reflect.Method.invoke(Method.java:324)at org.apache.axis.providers.java.RPCProvider.invokeMethod(RPCProvider.java:372)at org.apache.axis.providers.java.RPCProvider.processMessage(RPCProvider.java:292)at org.apache.axis.providers.java.JavaProvider.invoke(JavaProvider.java:276)at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:71)at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:156)at org.apache.axis.SimpleChain.invoke(SimpleChain.java:126)at org.apache.axis.handlers.soap.SOAPService.invoke(SOAPService.java:437)at org.apache.axis.server.AxisServer.invoke(AxisServer.java:316)at org.apache.axis.transport.http.AxisServlet.doPost(AxisServlet.java:701)at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)at org.apache.axis.transport.http.AxisServletBase.service(AxisServletBase.java:335)at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:247)at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:193)at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:256)at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:643)at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:480)at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:995)at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:643)at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:480)at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:995)at org.apache.catalina.core.StandardContext.invoke(StandardContext.java:2422)at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:180)at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:643)at org.apache.catalina.valves.ErrorDispatcherValve.invoke(ErrorDispatcherValve.java:171)at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:641)at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:163)at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:641)at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:480)at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:995)at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:174)at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:643)at 
org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:480)at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:995)at org.apache.coyote.tomcat4.CoyoteAdapter.service(CoyoteAdapter.java:199)at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:833)at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:711)at org.apache.tomcat.util.net.TcpWorkerThread.runIt(PoolTcpEndpoint.java:584)at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:687)at java.lang.Thread.run(Thread.java:536)']
    1/17/06 12:37:01- (34432) *** Job Operation #1 failed: ProcessUsers failed (0x4)

    That's the correct place to look for the version.
    My guess at what is happening is that some of the users do not have the 'workforceID' attribute, and that is causing the AWS to fail when it gets to them. Unfortunately, there is not great error logging around this in the 2.0 version of the LDAP AWS. To find out whether this is indeed the case, and to see which user does not have the attribute, do a trial run with workforceID as the User Login Attribute; that case is caught and reported better.
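
    To narrow it down before re-running the sync job, you can also query the directory directly for entries that have no workforceID value at all. The snippet below is only a hedged illustration using plain JNDI: the host, bind DN, password, and search base are placeholders, the attribute name is assumed to be exactly workforceID, and the objectClass=person filter is an assumption about your schema.

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    public class FindUsersMissingWorkforceId {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");   // placeholder host
            env.put(Context.SECURITY_PRINCIPAL, "cn=admin,o=example");      // placeholder bind DN
            env.put(Context.SECURITY_CREDENTIALS, "secret");                // placeholder password

            DirContext ctx = new InitialDirContext(env);
            SearchControls sc = new SearchControls();
            sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
            sc.setReturningAttributes(new String[] {"cn"});

            // Users that have no workforceID value at all.
            NamingEnumeration<SearchResult> results =
                    ctx.search("o=example", "(&(objectClass=person)(!(workforceID=*)))", sc);
            while (results.hasMore()) {
                System.out.println(results.next().getNameInNamespace());
            }
            ctx.close();
        }
    }

    Any DN it prints is a candidate for the entry that makes the sync job fail.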

  • Messaging Server: mail and mailalternativeaddress attributes must be unique

    Email addresses for a given User Directory must have unique mail
    and mailalternativeaddress attributes.
    Messaging Server uses the following two attributes to search for email
    addresses in the User Directory:
    1. mail
    2. mailalternativeaddress
    Each email address in the User Directory must have unique entries for these
    two attributes. If a given email address appears twice in the User Directory,
    the Messaging Server will display an error message and will not deliver the
    email to either address. If a user wants email that is addressed to one
    address to go to other addresses as well (e.g., if the user has multiple
    accounts), you need to create a group for that user and add the other email
    addresses to the group.
    Additional Notes:
    - Administration Server 3.x and 4.x have various mechanisms for reducing
    the likelihood of user error when creating email accounts.
    - Administrators of Directory Server 4.x servers should also be aware that
    the mail and mailalternativeaddress attributes must be unique across both
    attributes to prevent collisions between them.
    The uid uniqueness plugin maintains uniqueness within an attribute, but not
    across attributes, so using it for both attributes cannot fully prevent
    address collisions.
    For example, a user could theoretically enter the same value for the
    mail and mailalternativeaddress attributes:
    mail: [email protected]
    mailalternativeaddress: [email protected]
    Although the server would initially accept this value for both attributes,
    it will post an error message later when it processes an email addressed to
    this address.
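
    Since the uid uniqueness plugin cannot enforce uniqueness across the two attributes, one practical guard is to search for a candidate address under both attributes before creating or modifying an account. A hedged JNDI sketch (the base DN and an already-open DirContext are assumed; escape untrusted input per RFC 4515 before building the filter):

    import javax.naming.NamingException;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.SearchControls;

    public class AddressUniquenessCheck {
        /**
         * Returns true if the address is already present in either the mail or
         * the mailalternativeaddress attribute anywhere under the given base DN.
         */
        public static boolean addressInUse(DirContext ctx, String baseDn, String address)
                throws NamingException {
            SearchControls sc = new SearchControls();
            sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
            // Look under both attributes in one search.
            String filter = "(|(mail=" + address + ")(mailalternativeaddress=" + address + "))";
            return ctx.search(baseDn, filter, sc).hasMore();
        }
    }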

    What does “I overwrote the information in Accounts” mean? What exactly did you do? More info, please.
    Deleting a mail account (which is not the same as overwriting the account information) causes all the mail stored in that account’s mailboxes to be removed from the computer as well, and you were warned that this would happen by an alert similar to this one for Mail 2.x (I don’t know the exact wording in Mac OS X 10.3):
    Remove Account
    Are you sure you want to remove the <AccountType> account "<AccountName>"?
    This will permanently delete the account setup information, mailboxes, and messages from your computer. Messages stored on the mail server will not be affected.
    For IMAP-type accounts, this is not a problem because mail is stored on the server. If you deleted a POP account, however, your mail has certainly been wiped from the computer, not just moved somewhere.
    To prevent that from happening, you should have moved your mail to custom “On My Mac” mailboxes instead of leaving it in the account’s mailboxes. Actually, using the account’s Inbox and Sent mailboxes for archiving purposes is a bad idea regardless — see Overstuffed mailbox is unexpectedly empty.
    If the POP account was configured to leave the messages on the server for some time, setting up the account again would cause any messages still on the server to be downloaded again, just like for IMAP accounts, but this would only work for received mail that has not yet been removed from the server.
    If the messages are not on the server and you don’t have a backup (which you should have made before doing something like that), you may try to salvage as many deleted mbox files as possible with a data recovery tool such as Data Rescue II or FileSalvage (the files to be recovered would be different in the case of Mail 2.x). Stop using your computer right now if you want to try that, as anything you do with the computer may cause the deleted files to be overwritten.

  • Errors in the /users/unison/log/das.log file with  "xitemid not unique".

    When reviewing /users/unison/log/das.log, it shows errors with
    "xitemid not unique":
    DATE = Current date
    PID = 12492
    DEXOTEK ERRCODE 0x1800B -> ctldap_ItemGetById: ctldap_LDAPEntryGetById
    DATE = Current date
    PID = 12743
    DEXOTEK ERRCODE 0x1800B -> ctldap_LDAPEntryGetById: xItemId not unique
    Also, when comparing the das.log to the directory server access log, matching
    the timestamp from the error above, you may see entries such as the
    following:
    [23/Jun/1998:15:45:28 -0700] conn=37 op=66 SRCH base="o=Ace Industries,c=US" scope=2
    filter="(nscalxitemid=10000:00257)"
    [23/Jun/1998:15:45:28 -0700] conn=37 op=66 RESULT err=0 tag=101 nentries=1
    [23/Jun/1998:15:45:28 -0700] conn=37 op=67 BIND dn="uid=nuser1,o=Ace Industries,c=US"
    method=128 version=3
    [23/Jun/1998:15:45:28 -0700] conn=37 op=67 RESULT err=0 tag=97 nentries=0
    [23/Jun/1998:15:45:28 -0700] conn=37 op=68 SRCH base="o=Ace Industries,c=US" scope=2
    filter="(nscalxitemid=10000:*)"
    [23/Jun/1998:15:45:28 -0700] conn=37 op=68 RESULT err=4 tag=101 nentries=2
    where err=4 is LDAP SIZELIMIT EXCEEDED. This means that the search returned
    multiple entries with the same ID, so the server does not know which value
    to return.
    These log entries might be due to duplicate or multiple nscalxitemid attributes
    in a replicated directory server; each calendar-enabled user and resource
    is supposed to have a single, unique nscalxitemid attribute.
    An example attribute entry in the LDAP database for user 00257 on
    node 10000 would be:
    nscalxitemid: 10000:00257
    To identify duplicate or multiple nscalxitemid attributes, use the calendar
    server tool unidsdiff, which can be downloaded from
    http://help.netscape.com/business/filelib.html#caltools.

    amaltsev1,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Visit http://support.novell.com and search the knowledgebase and/or check all
    the other self support options and support programs available.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://forums.novell.com)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://forums.novell.com/faq.php
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://support.novell.com/forums/

  • Unique's not so unique!

    Hi,
    I'm trying to use the schema validating parser. I'm tweaking the example files that are downloaded with the parser, namely report.xsd and report.xml. In report.xsd it specifies...
    <unique name="pZipCode">
      <selector>regions/zip</selector>
      <field>@code</field>
    </unique>
    which should tell the parser that each regions/zip element must have a unique code attribute. The report.xml file has this...
    <regions>
      <zip code="95819">
        <part number="872-AA" quantity="1"/>
      </zip>
      <zip code="95819">
        <part number="755-KY" quantity="4"/>
      </zip>
    </regions>
    As you can see, both regions/zip elements have @code="95819", which should not be allowed because the schema specifies that they be unique. I know the release notes for the version I'm using say the parser doesn't support unique, but they say that only applies to the SAX parser. I'm using the DOM parser, so what am I doing wrong? This is a very important feature that I would like to use. If anyone has any clues as to what I'm doing wrong, please let me know.
    Thanks.
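
    For comparison, with a JAXP-based validator (such as the one shipped with recent JDKs and Xerces), identity constraints like <unique> are enforced when the schema is attached to the DOM parser and an error handler is registered; whether your particular parser build enforces them is a separate question. Note also that the final W3C syntax puts the paths in xpath attributes (<xs:selector xpath="regions/zip"/>), so the old example schema may need updating first. A minimal sketch, assuming report.xsd and report.xml are in the working directory:

    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.validation.SchemaFactory;
    import org.xml.sax.ErrorHandler;
    import org.xml.sax.SAXParseException;

    public class ValidateReport {
        public static void main(String[] args) throws Exception {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);
            // Attach the schema so the DOM parser validates, including identity constraints.
            dbf.setSchema(SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                    .newSchema(new File("report.xsd")));
            DocumentBuilder builder = dbf.newDocumentBuilder();
            builder.setErrorHandler(new ErrorHandler() {
                public void warning(SAXParseException e) { System.out.println("warning: " + e.getMessage()); }
                public void error(SAXParseException e) { System.out.println("error: " + e.getMessage()); }
                public void fatalError(SAXParseException e) throws SAXParseException { throw e; }
            });
            // With duplicate @code values, the error handler should report a
            // duplicate-key (identity constraint) violation rather than parse silently.
            builder.parse(new File("report.xml"));
        }
    }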

    Hi sanjay_t,
    From the warning messages you provided, I suspect that the button control is not being found by its search properties when you play back the Coded UI test.
    (1) I suggest you try the DrawHighlight method to check whether the button control is found before you perform actions on the target button control.
    (2) I suggest you try the FindMatchingControls method on the button control to check whether more than one control matches the specified search properties and filter properties.
    In addition, you said you are using CUIT to write automated UI tests; does that mean you wrote the UI actions by hand-coding them? If so, I suggest you create a simple coded UI test by recording actions with the Coded UI Test Builder, generate the code, and then try to play it back. If playback succeeds, cross-check the properties in the UIMap.Designer.cs file. You can use the generated code for entering values in the text edit in UIMap.Designer.cs as a reference for writing your own code.
    For more information about how to find a control for a coded UI test, see the following blog:
    http://blogs.msdn.com/b/balagans/archive/2009/12/28/9941582.aspx
    Best Regards,

  • Getting error when trying to extend standard VO with transient attributes

    Hello,
    I am trying to extend the standard VO ReqSummaryVO in iprocurement module and getting the error "Each Row in the Query Result Columns must be mapped to a unique Query Attribute in the mapped entity columns" at step 4. This VO has a lot of transient attributes. I have gone through solutions from other threads related to this error but none of them worked for me. Can someone help me on this please?
    Thanks,
    Girish.


  • Getting realistic performance expectations.

    I am running tests to see if I can use the Oracle Berkeley XML database as a backend to a web application, but I am running into query response performance limitations. As per the suggestions for performance-related questions, I have pulled together answers to the series of questions that need to be addressed; they are given below. The basic issue at stake, however, is: am I being realistic about what I can expect to achieve with the database?
    Regards
    Geoff Shuetrim
    Oracle Berkeley DB XML database performance.
    Berkeley DB XML Performance Questionnaire
    1. Describe the Performance area that you are measuring? What is the
    current performance? What are your performance goals you hope to
    achieve?
    I am using the database as a back end to a web application that is expected
    to field a large number of concurrent queries.
    The database scale is described below.
    Current performance involves responses to simple queries that involve 1-2
    minute turn around (this improves after a few similar queries have been run,
    presumably because of caching, but not to a point that is acceptable for
    web applications).
    Desired performance is for queries to execute in milliseconds rather than
    minutes.
    2. What Berkeley DB XML Version? Any optional configuration flags
    specified? Are you running with any special patches? Please specify?
    Berkeley DB XML Version: 2.4.16.1
    Configuration flags: enable-java -b 64 prefix=/usr/local/BerkeleyDBXML-2.4.16
    No special patches have been applied.
    3. What Berkeley DB Version? Any optional configuration flags
    specified? Are you running with any special patches? Please Specify.
    Berkeley DB Version? 4.6.21
    Configuration flags: None. The Berkeley DB was built and installed as part of the
    Oracle Berkeley XML database build and installation process.
    No special patches have been applied.
    4. Processor name, speed and chipset?
    Intel Core 2 CPU 6400 @ 2.13 GHz (1066 FSB) (4MB Cache)
    5. Operating System and Version?
    Ubuntu Linux 8.04 (Hardy) with the 2.6.24-23 generic kernel.
    6. Disk Drive Type and speed?
    300 GB 7200RPM hard drive.
    7. File System Type? (such as EXT2, NTFS, Reiser)
    EXT3
    8. Physical Memory Available?
    Memory: 3.8GB DDR2 SDRAM
    9. Are you using Replication (HA) with Berkeley DB XML? If so, please
    describe the network you are using, and the number of Replica’s.
    No.
    10. Are you using a Remote Filesystem (NFS) ? If so, for which
    Berkeley DB XML/DB files?
    No.
    11. What type of mutexes do you have configured? Did you specify
    --with-mutex=? Specify what you find in your config.log; search
    for db_cv_mutex.
    I did not specify --with-mutex when building the database.
    config.log indicates:
    db_cv_mutex=POSIX/pthreads/library/x86_64/gcc-assembly
    12. Which API are you using (C++, Java, Perl, PHP, Python, other) ?
    Which compiler and version?
    I am using the Java API.
    I am using the gcc 4.2.4 compiler.
    I am using the g++ 4.2.4 compiler.
    13. If you are using an Application Server or Web Server, please
    provide the name and version?
    I am using the Tomcat 5.5 application server.
    It is not using the Apache Portable Runtime library.
    It is being run using a 64 bit version of the Sun Java 1.5 JRE.
    14. Please provide your exact Environment Configuration Flags (include
    anything specified in you DB_CONFIG file)
    I do not have a DB_CONFIG file in the database home directory.
    My environment configuration is as follows:
    Threaded = true
    AllowCreate = true
    InitializeLocking = true
    ErrorStream = System.err
    InitializeCache = true
    Cache Size = 1024 * 1024 * 500
    InitializeLogging = true
    Transactional = false
    TrickleCacheWrite = 20
    15. Please provide your Container Configuration Flags?
    My container configuration is done using the Java API.
    The container creation code is:
    XmlContainerConfig containerConfig = new XmlContainerConfig();
    containerConfig.setStatisticsEnabled(true);
    XmlContainer container = xmlManager.createContainer("container", containerConfig);
    I am guessing that this means that the only flag I have set is the one
    that enables recording of statistics to use in query optimization.
    I have no other container configuration information to provide.
    16. How many XML Containers do you have?
    I have one XML container.
    The container has 2,729,465 documents.
    The container is a node container rather than a wholedoc container.
    Minimum document size is around 1Kb.
    Maximum document size is around 50Kb.
    Average document size is around 2Kb.
    I am using document data as part of the XQueries being run. For
    example, I condition query results upon the values of attributes
    and elements in the stored documents.
    The database has the following indexes:
    xmlIndexSpecification = dataContainer.getIndexSpecification();
    xmlIndexSpecification.replaceDefaultIndex("node-element-presence");
    xmlIndexSpecification.addIndex(Constants.XBRLAPINamespace,"fragment","node-element-presence");
    xmlIndexSpecification.addIndex(Constants.XBRLAPINamespace,"data","node-element-presence");
    xmlIndexSpecification.addIndex(Constants.XBRLAPINamespace,"xptr","node-element-presence");
    xmlIndexSpecification.addIndex("","stub","node-attribute-presence");
    xmlIndexSpecification.addIndex("","index", "unique-node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XBRL21LinkNamespace,"label","node-element-substring-string");
    xmlIndexSpecification.addIndex(Constants.GenericLabelNamespace,"label","node-element-substring-string");
    xmlIndexSpecification.addIndex("","name","node-attribute-substring-string");
    xmlIndexSpecification.addIndex("","parentIndex", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","uri", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","type", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","targetDocumentURI", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","targetPointerValue", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","absoluteHref", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","id","node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","value", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","arcroleURI", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","roleURI", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","name", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","targetNamespace", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","contextRef", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","unitRef", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","scheme", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","value", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XBRL21Namespace,"identifier", "node-element-equality-string");           
    xmlIndexSpecification.addIndex(Constants.XMLNamespace,"lang","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"label","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"from","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"to","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"type","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"arcrole","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"role","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"label","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XBRLAPILanguagesNamespace,"language","node-element-presence");
    xmlIndexSpecification.addIndex(Constants.XBRLAPILanguagesNamespace,"code","node-element-equality-string");
    xmlIndexSpecification.addIndex(Constants.XBRLAPILanguagesNamespace,"value","node-element-equality-string");
    xmlIndexSpecification.addIndex(Constants.XBRLAPILanguagesNamespace,"encoding","node-element-equality-string");17. Please describe the shape of one of your typical documents? Please
    do this by sending us a skeleton XML document.
    The following provides the basic information about the shape of all documents
    in the data store.
    <ns:fragment xmlns:ns="..." attrs...(about 20 of them)>
      <ns:data>
        Single element that varies from document to document but that
        is rarely more than a few small elements in size and (in some cases)
        a lengthy section of string content for the single element.
      </ns:data>
    </ns:fragment>
    18. What is the rate of document insertion/update required or
    expected? Are you doing partial node updates (via XmlModify) or
    replacing the document?
    Document insertion rates are not a first order performance criteria.
    I do no document modifications using XmlModify.
    When doing updates I replace the original document.
    19. What is the query rate required/expected?
    I am not sure how to provide metrics for this, but generating a single web
    page can involve hundreds of queries, each of which should be trivial to
    execute given the indexing strategy in use.
    20. XQuery -- supply some sample queries
    1. Please provide the Query Plan
    2. Are you using DBXML_INDEX_NODES?
              I am using DBXML_INDEX_NODES by default because I
              am using a node container rather than a whole document
              container.
    3. Display the indices you have defined for the specific query.
    4. If this is a large query, please consider sending a smaller
    query (and query plan) that demonstrates the problem.
    Example queries.
    1. collection('browser')/*[@parentIndex='none']
    <XQuery>
      <QueryPlanToAST>
        <LevelFilterQP>
          <StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
            <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="parentIndex" value="none"/>
          </StepQP>
        </LevelFilterQP>
      </QueryPlanToAST>
    </XQuery>
    2. collection('browser')/*[@stub]
    <XQuery>
      <QueryPlanToAST>
        <LevelFilterQP>
          <StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
            <PresenceQP container="browser" index="node-attribute-presence-none" operation="eq" child="stub"/>
          </StepQP>
        </LevelFilterQP>
      </QueryPlanToAST>
    </XQuery>
    3. qplan "collection('browser')/*[@type='org.xbrlapi.impl.ConceptImpl' or @parentIndex='asdfv_3']"
    <XQuery>
      <QueryPlanToAST>
        <LevelFilterQP>
          <StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
            <UnionQP>
              <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="type" value="org.xbrlapi.impl.ConceptImpl"/>
              <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="parentIndex" value="asdfv_3"/>
            </UnionQP>
          </StepQP>
        </LevelFilterQP>
      </QueryPlanToAST>
    </XQuery>
    4.
    setnamespace xlink http://www.w3.org/1999/xlink
    qplan "collection('browser')/*[@uri='http://www.xbrlapi.org/my/uri' and */*[@xlink:type='resource' and @xlink:label='description']]"
    <XQuery>
      <QueryPlanToAST>
        <LevelFilterQP>
          <NodePredicateFilterQP uri="" name="#tmp8">
            <StepQP axis="parent-of-child" uri="*" name="*" nodeType="element">
              <StepQP axis="parent-of-child" uri="*" name="*" nodeType="element">
                <NodePredicateFilterQP uri="" name="#tmp1">
                  <StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
                    <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="label:http://www.w3.org/1999/xlink"
                    value="description"/>
                  </StepQP>
                  <AttributeJoinQP>
                    <VariableQP name="#tmp1"/>
                    <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="type:http://www.w3.org/1999/xlink"
                    value="resource"/>
                  </AttributeJoinQP>
                </NodePredicateFilterQP>
              </StepQP>
            </StepQP>
            <AttributeJoinQP>
              <VariableQP name="#tmp8"/>
              <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="uri" value="http://www.xbrlapi.org/my/uri"/>
            </AttributeJoinQP>
          </NodePredicateFilterQP>
        </LevelFilterQP>
      </QueryPlanToAST>
    </XQuery>
    21. Are you running with Transactions? If so please provide any
    transactions flags you specify with any API calls.
    I am not running with transactions.
    22. If your application is transactional, are your log files stored on
    the same disk as your containers/databases?
    The log files are stored on the same disk as the container.
    23. Do you use AUTO_COMMIT?
    Yes. I think that it is a default feature of the DocumentConfig that
    I am using.
    24. Please list any non-transactional operations performed?
    I do document insertions and I do XQueries.
    25. How many threads of control are running? How many threads in read
    only mode? How many threads are updating?
    One thread is updating. Right now one thread is running queries. I am
    not yet testing the web application with concurrent users given the
    performance issues faced with a single user.
    26. Please include a paragraph describing the performance measurements
    you have made. Please specifically list any Berkeley DB operations
    where the performance is currently insufficient.
    I have loaded approximately 7 GB data into the container and then tried
    to run the web application using that data. This involves running a broad
    range of very simple queries, all of which are expected to be supported
    by indexes to ensure that they do not require XML document traversal activity.
    Querying performance is insufficient, with even the most basic queries
    taking several minutes to complete.
    27. What performance level do you hope to achieve?
    I hope to be able to run a web application that simultaneously handles
    page requests from hundreds of users, each of which involves a large
    number of database queries.
    28. Please send us the output of the following db_stat utility commands
    after your application has been running under "normal" load for some
    period of time:
    % db_stat -h database environment -c
    1038     Last allocated locker ID
    0x7fffffff     Current maximum unused locker ID
    9     Number of lock modes
    1000     Maximum number of locks possible
    1000     Maximum number of lockers possible
    1000     Maximum number of lock objects possible
    155     Number of current locks
    157     Maximum number of locks at any one time
    200     Number of current lockers
    200     Maximum number of lockers at any one time
    13     Number of current lock objects
    17     Maximum number of lock objects at any one time
    1566M     Total number of locks requested (1566626558)
    1566M     Total number of locks released (1566626403)
    0     Total number of locks upgraded
    852     Total number of locks downgraded
    3     Lock requests not available due to conflicts, for which we waited
    0     Lock requests not available due to conflicts, for which we did not wait
    0     Number of deadlocks
    0     Lock timeout value
    0     Number of locks that have timed out
    0     Transaction timeout value
    0     Number of transactions that have timed out
    712KB     The size of the lock region
    21807     The number of region locks that required waiting (0%)
    % db_stat -h database environment -l
    0x40988     Log magic number
    13     Log version number
    31KB 256B     Log record cache size
    0     Log file mode
    10Mb     Current log file size
    0     Records entered into the log
    28B     Log bytes written
    28B     Log bytes written since last checkpoint
    1     Total log file I/O writes
    0     Total log file I/O writes due to overflow
    1     Total log file flushes
    0     Total log file I/O reads
    1     Current log file number
    28     Current log file offset
    1     On-disk log file number
    28     On-disk log file offset
    1     Maximum commits in a log flush
    0     Minimum commits in a log flush
    96KB     Log region size
    0     The number of region locks that required waiting (0%)
    % db_stat -h database environment -m
    500MB     Total cache size
    1     Number of caches
    1     Maximum number of caches
    500MB     Pool individual cache size
    0     Maximum memory-mapped file size
    0     Maximum open file descriptors
    0     Maximum sequential buffer writes
    0     Sleep after writing maximum sequential buffers
    0     Requested pages mapped into the process' address space
    1749M     Requested pages found in the cache (99%)
    722001     Requested pages not found in the cache
    911092     Pages created in the cache
    722000     Pages read into the cache
    4175142     Pages written from the cache to the backing file
    1550811     Clean pages forced from the cache
    19568     Dirty pages forced from the cache
    3     Dirty pages written by trickle-sync thread
    62571     Current total page count
    62571     Current clean page count
    0     Current dirty page count
    65537     Number of hash buckets used for page location
    1751M     Total number of times hash chains searched for a page (1751388600)
    8     The longest hash chain searched for a page
    3126M     Total number of hash chain entries checked for page (3126038333)
    4535     The number of hash bucket locks that required waiting (0%)
    278     The maximum number of times any hash bucket lock was waited for (0%)
    1     The number of region locks that required waiting (0%)
    0     The number of buffers frozen
    0     The number of buffers thawed
    0     The number of frozen buffers freed
    1633189     The number of page allocations
    4301013     The number of hash buckets examined during allocations
    259     The maximum number of hash buckets examined for an allocation
    1570522     The number of pages examined during allocations
    1     The max number of pages examined for an allocation
    184     Threads waited on page I/O
    Pool File: browser
    8192     Page size
    0     Requested pages mapped into the process' address space
    1749M     Requested pages found in the cache (99%)
    722001     Requested pages not found in the cache
    911092     Pages created in the cache
    722000     Pages read into the cache
    4175142     Pages written from the cache to the backing file
    % db_stat -h database environment -r
    Not applicable.
    % db_stat -h database environment -t
    Not applicable.
    vmstat
    r b swpd free buff cache si so bi bo in cs us sy id wa
    1 4 40332 773112 27196 1448196 0 0 173 239 64 1365 19 4 72 5
    iostat
    Linux 2.6.24-23-generic (dell)      06/02/09
    avg-cpu: %user %nice %system %iowait %steal %idle
    18.37 0.01 3.75 5.67 0.00 72.20
    Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
    sda 72.77 794.79 1048.35 5376284 7091504
    29. Are there any other significant applications running on this
    system? Are you using Berkeley DB outside of Berkeley DB XML?
    Please describe the application?
    No other significant applications are running on the system.
    I am not using Berkeley DB outside of Berkeley DB XML.
    The application is a web application that organises the data in
    the stored documents into hypercubes that users can slice/dice and analyse.
    Edited by: Geoff Shuetrim on Feb 7, 2009 2:23 PM to correct the appearance of the query plans.

    Hi Geoff,
    Thanks for filling out the performance questionnaire. Unfortunately the forum software seems to have destroyed some of your queries - you might want to use \[code\] and \[code\] to markup your queries and query plans next time.
    Geoff Shuetrim wrote:
    Current performance involves responses to simple queries that involve 1-2
    minute turn around (this improves after a few similar queries have been run,
    presumably because of caching, but not to a point that is acceptable for
    web applications).
    Desired performance is for queries to execute in milliseconds rather than
    minutes.
    I think that this is a reasonable expectation in most cases.
    14. Please provide your exact Environment Configuration Flags (include
    anything specified in you DB_CONFIG file)
    I do not have a DB_CONFIG file in the database home directory.
    My environment configuration is as follows:
    Threaded = true
    AllowCreate = true
    InitializeLocking = true
    ErrorStream = System.err
    InitializeCache = true
    Cache Size = 1024 * 1024 * 500
    InitializeLogging = true
    Transactional = false
    TrickleCacheWrite = 20
    If you are performing concurrent reads and writes, you need to enable transactions in both the environment and the container.
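    For reference, enabling transactions through the Java API looks roughly like the sketch below. This is not code from the thread; the environment path is a placeholder, and the exact XmlContainerConfig methods may differ slightly between DB XML releases.

    import java.io.File;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    import com.sleepycat.dbxml.XmlContainer;
    import com.sleepycat.dbxml.XmlContainerConfig;
    import com.sleepycat.dbxml.XmlManager;
    import com.sleepycat.dbxml.XmlManagerConfig;

    public class TxnSetup {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envConf = new EnvironmentConfig();
            envConf.setAllowCreate(true);
            envConf.setInitializeCache(true);
            envConf.setCacheSize(500 * 1024 * 1024);
            envConf.setInitializeLocking(true);
            envConf.setInitializeLogging(true);
            envConf.setTransactional(true);              // transactions on the environment

            Environment env = new Environment(new File("/path/to/dbenv"), envConf); // placeholder path
            XmlManager mgr = new XmlManager(env, new XmlManagerConfig());

            XmlContainerConfig contConf = new XmlContainerConfig();
            contConf.setTransactional(true);             // and on the container
            XmlContainer container = mgr.openContainer("container", contConf);

            // ... transactional document insertions and queries go here ...

            container.delete();                          // release native handles
            mgr.delete();
            env.close();
        }
    }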
    Example queries.
    1. collection('browser')/*[@parentIndex='none']
    <XQuery>
    <QueryPlanToAST>
    <LevelFilterQP>
    <StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
    <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="parentIndex" value="none"/>
    </StepQP>
    </LevelFilterQP>
    </QueryPlanToAST>
    </XQuery>
    I have a few initial observations about this query:
    1) It looks like it could return a lot of results - a query that returns a lot of results will always be slow. If you only want a subset of the results, use lazy evaluation, or put an explicit call to the subsequence() function in the query.
    2) An explicit element name with an index on it often performs faster than a "*" step. I think you'll get faster query execution if you specify the document element name rather than "*", and then add a "node-element-presence" index on it.
    3) Generally the descendant axis is faster than the child axis. If you just need the document rather than the document (root) element, you might find that this query is a little faster (any document with a "parentIndex" attribute whose value is "none"):
    collection()[descendant::*/@parentIndex='none']
    Similar observations apply to the other queries you posted.
    Get back to me if you're still having problems with specific queries.
    John
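
    To illustrate John's first point about lazy evaluation and subsequence(), here is a rough sketch against the DB XML Java API (not from the thread; it assumes an already-open XmlManager and the 'browser' container, and fetches only the first page of matches):

    import com.sleepycat.dbxml.XmlException;
    import com.sleepycat.dbxml.XmlManager;
    import com.sleepycat.dbxml.XmlQueryContext;
    import com.sleepycat.dbxml.XmlResults;
    import com.sleepycat.dbxml.XmlValue;

    public class PagedQuery {
        /** Print only the first pageSize matches, evaluated lazily. */
        static void firstPage(XmlManager mgr, int pageSize) throws XmlException {
            XmlQueryContext ctx = mgr.createQueryContext();
            ctx.setEvaluationType(XmlQueryContext.Lazy);  // produce results on demand
            String query = "subsequence(collection('browser')/*[@parentIndex='none'], 1, "
                    + pageSize + ")";
            XmlResults results = mgr.query(query, ctx);
            try {
                while (results.hasNext()) {
                    XmlValue value = results.next();
                    System.out.println(value.asString());
                }
            } finally {
                results.delete();   // release the native handle
            }
        }
    }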

  • VO extension error:  FND, Message Name: FND_VIEWOBJECT_NOT_FOUND

    All, I am trying to extend a VO on an iSupplier page to display some additional fields in 12.1.3.
    While following the regular process, I got the error "each row in the query result columns must be mapped to a unique query attribute in the mapped entity columns".
    So I created a new VO without any changes, then added a new attribute, changed the SQL of the extended VO, did the substitution, deployed the
    files and bounced the app server.
    Then, in the About This Page section, I do see my new extended VO in place of the original VO, and I do see the new attribute. So far so good.
    Then I tried to personalize the page to add the new field, and I am getting the error below:
    "Message not found. Application: FND, Message Name: FND_VIEWOBJECT_NOT_FOUND.  Tokens: VONAME = VendorsVO; APPLICATION_MODULE =
    oracle.apps.pos.supplier.server.ByrSuppAM"
    What I don't understand is why the AM is looking for the old VO. The name of my new VO is xxVendorsVO.
    Can someone please help me? I have hit a dead end here.
    Thanks.

    Hi,
    After the VO extension, when you add the new field to the page via personalization, please make sure your view instance is VendorsVO (i.e., the standard VO).
    Dilip's Oracle Application Framework Blogs: Deploying Your ViewObject Extensions
    Regards,
    Dilip

  • OIM 11g R2 -AD Provisioning Error

    Hi,
    We have configured AD connector server. When we try to provision the user with AD account we get:
    Target Class = oracle.iam.connectors.icfcommon.prov.ICProvisioningManager
    <Nov 14, 2012 10:05:40 PM PST> <Error> <ORACLE.IAM.CONNECTORS.ICFCOMMON.PROV.ICPROVISIONINGMANAGER> <BEA-000000> <oracle.iam.connectors.icfcommon.prov.ICProvisioningManager : createObject : Error while creating user
    java.lang.IllegalArgumentException: Parameter 'name' must not be blank.
    at org.identityconnectors.common.Assertions.blankCheck(Assertions.java:90)
    at oracle.iam.connectors.icfcommon.service.oim9.OIM9Configuration.getConfiguration(OIM9Configuration.java:139)
    I can see that all the mandatory fields are pre-populated except the Unique ID attribute. Could this be the issue, and if so, how do we handle it? I can see there are no events logged at the connector server end for this provisioning attempt.
    We have reconciled Groups, Organization successfully using connector server.
    Can anyone help on this asap..!
    Thanks

    The Unique ID attribute is ObjectGUID, which I think should be auto-generated. I can see that my user ID, OU, and other mandatory attributes are populated on the process form, but I am still facing this issue.
    It throws this error as soon as CREATEOBJECT is invoked.
    Thanks again

  • XML file parse issue

    I have a requirement to print <attribute-override> and <column> in a spreadsheet.
    My XML file is as follows:
    <?xml version="1.0" encoding="UTF-8"?>
    <entity-mappings xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/orm" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.eclipse.org/eclipselink/xsds/persistence/orm http://www.eclipse.org/eclipselink/xsds/eclipselink_orm_2_4.xsd">
      <entity class="com.ofss.fc.domain.account.entity.accountcreditmatrix.CreditMetricDetails">
        <table name="FLX_AC_ACCT_CREDIT_MATRIX_DTLS"/>
        <attributes>
          <embedded-id attribute-type="com.ofss.fc.domain.account.entity.accountcreditmatrix.CreditMetricDetailsKey" name="key">
            <attribute-override name="accountId">
              <column name="ACCOUNT_ID"/>
            </attribute-override>
            <attribute-override name="accountType">
              <column name="ACCOUNT_TYPE"/>
            </attribute-override>
            <attribute-override name="effectiveDate">
              <column name="EFFECTIVE_DATE"/>
            </attribute-override>
            <attribute-override name="matrixIdvalue">
              <column name="MATRIX_ID_VALUE"/>
            </attribute-override>
            <attribute-override name="classification">
              <column name="Classification"/>
            </attribute-override>
          </embedded-id>
          <embedded attribute-type="com.ofss.fc.domain.account.entity.accountcreditmatrix.CreditMetric" name="creditMetric">
            <attribute-override name="metricType">
              <column name="METRIC_TYPE" unique="false"/>
            </attribute-override>
            <attribute-override name="metricValue">
              <column name="METRIC_VALUE" unique="false"/>
            </attribute-override>
          </embedded>
          <embedded attribute-type="com.ofss.fc.domain.account.entity.accountcreditmatrix.RiskScore" name="riskScore">
            <attribute-override name="scoreType">
              <column name="SCORE_TYPE" unique="false"/>
            </attribute-override>
            <attribute-override name="otherScoreType">
              <column name="OTHER_SCORE_TYPE" unique="false"/>
            </attribute-override>
            <attribute-override name="scoreCardExternalReferenceNo">
              <column name="SCORE_EXTR_REF_NO" unique="false"/>
            </attribute-override>
            <attribute-override name="ratingModel">
              <column name="RATING_MODEL" unique="false"/>
            </attribute-override>
            <attribute-override name="ratingStatus">
              <column name="RATING_STATUS" unique="false"/>
            </attribute-override>
            <attribute-override name="riskGrade">
              <column name="RISK_GRADE" unique="false"/>
            </attribute-override>
            <attribute-override name="scoreCardIndex">
              <column name="SCORE_CARD_INDEX" unique="false"/>
            </attribute-override>
            <attribute-override name="score">
              <column name="SCORE" unique="false"/>
            </attribute-override>
          </embedded>
        </attributes>
      </entity>
    </entity-mappings>
    I have managed to write the following code:
    package xmlexcel;
    import org.apache.poi.hssf.usermodel.*;
    import java.util.ArrayList;
    import java.awt.List;
    import java.io.*;
    import java.util.ArrayList;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.parsers.ParserConfigurationException;
    import org.xml.sax.SAXException;
    import org.apache.poi.hssf.usermodel.HSSFCell;
    import org.apache.poi.hssf.usermodel.HSSFRow;
    import org.apache.poi.hssf.usermodel.HSSFSheet;
    import org.apache.poi.hssf.usermodel.HSSFWorkbook;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;
    public class XMLconvertExcel {
      private static File xmlDocument;
        private static NodeList e;
        int a;
      public void generateExcel(File xmlDocument) {
      try {
      HSSFWorkbook wb = new HSSFWorkbook();
      HSSFSheet spreadSheet = wb.createSheet("spreadSheet");
      spreadSheet.setColumnWidth((short)0,(short) (256*25));
      spreadSheet.setColumnWidth((short)1,(short) (256*25));
      spreadSheet.setColumnWidth((short)2,(short) (256*25));
      spreadSheet.setColumnWidth((short)3,(short) (256*25));
      DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
      DocumentBuilder builder = factory.newDocumentBuilder();
      Document document = builder.parse(xmlDocument);
      NodeList nList = document.getElementsByTagName("attributes");
      document.getDocumentElement().normalize();
      //a=nodelist.getLength();
      //e = printStackTrace();
      //System.out.println("I am here " +e);
             System.out.println("Root element :" + document.getDocumentElement().getNodeName() + " nlist length  " +nList.getLength());
             System.out.println("Node Type :" + document.getDocumentElement().getNodeType());
      HSSFRow row = spreadSheet.createRow(0);
      HSSFCell cell = row.createCell((short)0);
      cell.setCellValue("Entity");
      cell = row.createCell((short)1);
      cell.setCellValue("Table");
      cell = row.createCell((short)2);
      cell.setCellValue("Attribute");
      cell = row.createCell((short)3);
      cell.setCellValue("Column");
      HSSFRow row1 = spreadSheet.createRow(1);
      HSSFRow row2 = spreadSheet.createRow(2);
      HSSFRow row3 = spreadSheet.createRow(3);
      for (int i = 0; i < nList.getLength(); i++) {
      Node nNode = nList.item(i);
                 System.out.println("\nCurrent Element :"    + nNode.getNodeName());
                 switch (i) {
                 case 0:
                     // cell = row1.createCell((short) 0);
                     // cell.setCellValue("Attribute");
                     // trying from http://architects.dzone.com/articles/parsing-xml-using-dom-sax-and
                     cell = row1.createCell((short) 2);
                     cell.setCellValue(((Element) (nList.item(0)))
                             .getElementsByTagName("attribute-override").item(0)
                             .getFirstChild().getNodeValue());
                     break;
                 case 1:
                     // cell = row1.createCell((short) 1);
                     // cell.setCellValue("Table");
                     cell = row1.createCell((short) 3);
                     cell.setCellValue(((Element) (nList.item(0)))
                             .getElementsByTagName("column").item(0)
                             .getFirstChild().getNodeValue());
                     break;
                 case 2:
                     cell = row1.createCell((short) 2);
                     cell.setCellValue("Attribute");
                     cell = row1.createCell((short) 2);
                     cell.setCellValue(((Element) (nList.item(2)))
                             .getElementsByTagName("attribute-override").item(0)
                             .getFirstChild().getNodeValue());
                     cell = row1.createCell((short) 3);
                     cell.setCellValue("Column");
                     cell = row1.createCell((short) 3);
                     cell.setCellValue(((Element) (nList.item(3)))
                             .getElementsByTagName("column").item(0)
                             .getFirstChild().getNodeValue());
                     break;
                 default:
                     break;
                 }
             }
             // wb.write(arg1.getOutputPayload().getOutputStream());
             // Outputting to Excel spreadsheet
             FileOutputStream output = new FileOutputStream(
                     new File("C:\\java_training\\com\\XMLtoExcel\\ormaccount.xls"));
             wb.write(output);
             output.flush();
             output.close();
         } catch (IOException e) {
             System.out.println("IOException " + e.getMessage());
         } catch (ParserConfigurationException e) {
             System.out.println("ParserConfigurationException " + e.getMessage());
         } catch (SAXException e) {
             System.out.println("SAXException " + e.getMessage());
         }
       }

       private String printStackTrace() {
           // TODO Auto-generated method stub
           return null;
       }

       /**
        * @param args
        */
       public static void main(String[] args) {
           File xmlDocument = new File("C:\\java_training\\com\\XMLtoExcel\\AccountCreditMatrixDetails.orm.xml");
           XMLconvertExcel excel = new XMLconvertExcel();
           excel.generateExcel(xmlDocument);
       }
     }
    The two tags are not getting printed in separate columns.
    I have looked at
    http://www.javaworld.com/article/2076189/enterprise-java/book-excerpt--converting-xml-to-spreadsheet--and-vice-versa.html
    http://scn.sap.com/thread/3224533
    http://www.tutorialspoint.com/java_xml/java_dom_parse_document.htm
    The above URLs show examples with simple XML.
    Please can I get some assistance?
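
    Not a drop-in fix for the code above, but a rough sketch of the approach I would expect to work: iterate over every <attribute-override> element, read its name attribute and the name attribute of its child <column>, and give each pair its own spreadsheet row with the two values in separate columns. The class name and file arguments are placeholders, and it assumes POI 3.5 or later for the int-based createRow/createCell calls.

    import java.io.File;
    import java.io.FileOutputStream;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.apache.poi.hssf.usermodel.*;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class OverrideColumnDump {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(new File(args[0]));   // the orm.xml file
            HSSFWorkbook wb = new HSSFWorkbook();
            HSSFSheet sheet = wb.createSheet("spreadSheet");

            HSSFRow header = sheet.createRow(0);
            header.createCell(0).setCellValue("Attribute");
            header.createCell(1).setCellValue("Column");

            // Every <attribute-override> has exactly one <column> child in this mapping file,
            // so read both name attributes and give each pair its own row.
            NodeList overrides = doc.getElementsByTagName("attribute-override");
            for (int i = 0; i < overrides.getLength(); i++) {
                Element override = (Element) overrides.item(i);
                Element column = (Element) override.getElementsByTagName("column").item(0);
                HSSFRow row = sheet.createRow(i + 1);
                row.createCell(0).setCellValue(override.getAttribute("name"));
                row.createCell(1).setCellValue(column.getAttribute("name"));
            }

            FileOutputStream out = new FileOutputStream(args[1]);     // e.g. ormaccount.xls
            wb.write(out);
            out.close();
        }
    }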

    I too received this error as I tried to run my first Windows 8.1 deployment. Per another post I commented out this line
    <IEWelcomeMsg>false</IEWelcomeMsg>
    from the IE section of the unattend.xml. I was then able to run my deployment. I do not see this line in your posting though.
    I referenced this link even though it was for Windows 7.
    http://social.technet.microsoft.com/Forums/en-US/c41a2b69-a591-4cd3-86ab-6a0f8a73b858/getting-windows-could-not-parse-or-process-the-unattend-answer-file-for-pass-specialize-with?forum=mdt
    Hope this helps someone.
    JayTheTech
    To clarify, I edited the unattend.xml file from the Deployment Share, not C:\Windows\Panther.
    DS\control\task sequence ID\unattend.xml
    JayTheTech

  • Retrieving specific document using explicit index lookup

    The best way to retrieve a specific document seems to be using XmlContainer::getDocument, thus using the default index.
    However, I may not have the document name, but instead an ID I have defined a unique index for.
    Of course, I can use the XQuery interface to retrieve the document, and it's fast.
    It's faster, though, to directly use an index. I can use XmlContainer::lookupIndex in order to do this; it takes an optional parameter "value", and the documentation for 2.3.10 says: "Provides the value to which equality indices must be equal. This parameter is required when returning documents on equality indices, and it is ignored for all other types of indices."
    However, the documentation also says that XmlContainer::lookupIndex is deprecated, "in favor of using XmlManager::createIndexLookup and XmlIndexLookup::execute". That recommended approach does not seem to provide a way to look up a specific value. It seems I have to iterate over the index, which is not what I want. Am I missing something?
    Michael Ludwig

    George,
    thanks a lot - should have found this myself, I guess. Working example following:
    use strict;
    use warnings;
    use Sleepycat::DbXml 'simple';
    my $id = shift or die "usage: $0 <ID>\n";
    my $contfile = $ENV{HOME} . '/dbenv/tv.dbxml';
    my $iuri = ''; my $iname = 'ID';
    my $istrat = 'unique-node-attribute-equality-decimal';
    eval {
        my $mgr = XmlManager->new;
        my $cont = $mgr->openContainer($contfile, Db::DB_RDONLY);
        my $sval = XmlValue->new(XmlValue::DECIMAL, $id);
        my $idx = $mgr->createIndexLookup($cont, $iuri, $iname, $istrat, $sval);
        my $ctx = $mgr->createQueryContext(
            XmlQueryContext::LiveValues,
            XmlQueryContext::Lazy);
        my $res = $idx->execute($ctx);
        while ($res->hasNext) {
            $res->next(my $val);
            print $val;
        }
    };
    my $ex;
    if ($ex = catch XmlException) {
        die join "\n", ref $ex, $ex->what,
            'Exception Code: ' . $ex->getExceptionCode,
            'DbErrno: ' . $ex->getDbErrno;
    }
    elsif ($ex = catch std::exception) { die join "\n", ref $ex, $ex->what }
    elsif ($@)                         { die $@ }
    For an alternative approach using setLowBound(), apply the following diff:
    12c12,13
    <     my $idx = $mgr->createIndexLookup($cont, $iuri, $iname, $istrat, $sval);
    ---
    >     my $idx = $mgr->createIndexLookup($cont, $iuri, $iname, $istrat);
    >     $idx->setLowBound($sval, XmlIndexLookup::EQ);
    Thanks,
    Michael

  • *******Problem in VO Extension .........(Not Eo based VO)

    Hi all,
    I need to display a new column on my page, so I am extending the VO, which is not entity based.
    I followed all the steps in JDeveloper: after adding the extended VO in the first step, I get the parent VO query in the second step, and then move to the third step, which shows Create Variable. When I click Next, it displays the error:
    Each row in the Query Result Columns must be mapped to a Unique Query Attribute in the Mapped Entity Columns.
    Please suggest how I can overcome this.
    Thanks in advance
    Kashyap

    Hi,
    You are presumably extending the VO to add a new attribute so that you can display a new field on the screen, right? In that case, when you reach the parent VO query in the second step, you need to modify the query by adding the new required column to the SELECT statement; the next step will then generate the view attribute by itself. If you don't need to change the query, there is no purpose in extending a VO that is not based on an EO, because all of its attributes are already available as usual. So modify the query just enough to get the new view attribute; everything else will be taken care of automatically.
    Thanks
    Pratap

  • How to get process instance activity information

    We have the following need:
    The process instance will be created either manually or by integration from another system. It is not possible to have two instances running for the same process, so we want to check if there is already one instance running for a given process. The business information that identifies the process uniquely are attributes of the process data object.
    We need to find out how to:
    1) check if there is already existing instance for the process giving the business information that identifies it
    2) check if the existing instance is running or is ended/terminated
    We want to know whether there is any existing API we can use to achieve the needs above, which APIs they are, and where we can find more information on their use.
    We have tried searching the API documentation for methods that could be of use, but we were not able to identify any that address our needs.
    We also didn't find any topic in this forum about it.
    If possible please give us some examples of use.
    Version: Oracle BPM 11g

    Hi,
    In SOA Suite 11g you can use the Java API together with a Composite Sensor to achieve this behavior:
    1. First, create a composite sensor in the operation that create the process composite instance, initializing the sensor with the business information that identifies the process uniquely. This composite sensor can be used to search for the composite instance either on EM or via API. For more information about how to create Composite Sensor, see http://download.oracle.com/docs/cd/E14571_01/integration.1111/e10224/sca_compsensors.htm#insertedID0
    2. Using the Java API (http://download.oracle.com/docs/cd/E14571_01/apirefs.1111/e10659/toc.htm), search for the composite instance by the Composite Sensor, as explained at: http://blogs.oracle.com/soabpm/2009/08/soa_suite_11g_-apipart_3_-_f.html. With a reference to the composite instance, you can check its internal components states.
    The following libs must be imported in your java project:
    - <middleware_home>/oracle_common/modules/oracle.fabriccommon_11.1.1/fabric-common.jar
    - <middleware_home>/oracle_common/soa/modules/oracle.soa.mgmt_11.1.1/soa-infra-mgmt.jar
    - <middleware_home>/wlserver_10.3/server/lib/weblogic.jar
    - <middleware_home>/oss_11gr1/soa/modules/oracle.soa.fabric_11.1.1/oracle-soa-client-api.jar
    - <middleware_home>/oracle_common/webservices/wsclient_extended.jar
    I hope this can help.
    Regards,
    Rafael

  • VO Extension error

    Hi,
    I am using Apps 12.1.3. I want to create a transient variable by extending the VO, but when I extend the VO, step 4 (attribute mapping) gives an error.
    The error is: *Each row in the Query Result Columns must be mapped to a unique Query Attribute in the mapped Entity columns*
    Please help on this.
    Regards
    Rabindra

    Hi
    Please follow below link for VO extension
    http://oracleanil.blogspot.com/2010/11/eo-based-vo-extension-in-oaf-r12.html
    Please note: the above link shows a VO based on an EO. In your case, if it is a VO based on a SQL statement, then when you add the attribute it will be created as transient, and there is no need to change the XML as described in the link.
    For the rest, you can follow the link above.
    Thanks
    AJ

  • How to customize Database Tables Application Connector to OIM

    Hi,
    Does anybody know whether it is possible to customize the Database Tables Connector so that it can be used to reconcile against any table in the OIM schema (not only the USR table, which is the default)?
    I have reviewed the config XML files but did not find any entry that could be used to achieve this.
    Thanks in advance,
    Maciej.
    Edited by: maciej.mac on Oct 12, 2009 11:58 PM

    Hi,
    The custom DB App Tables connector is used to reconcile identities or account information from the target tables. When you create a GTC connector, the process creates all the OIM objects for you (Resource Object, Process, Recon rules, etc.).
    You need to look into the deployment document for the 'GTC DBApp Tables Connector'; see the following link, section "3 Creating the Connector".
    Link : [http://download.oracle.com/docs/cd/E11223_01/doc.910/e11194/create.htm#CIABIJCH]
    Remember not to create this as a connector for 'trusted mode'. This connector will replicate the tables from the target to OIM and reconcile the data you want into those tables. The data behaves as an 'account for the user', so you need to make sure you have a unique matching attribute in the target tables as well, to link the account to an OIM user.
    I recommend you go through the deployment doc and see what is created in OIM. That will make it clearer.

Maybe you are looking for

  • How do I reset airport extreme A1408 to use on different network after buying new Extreme?

    I have just bought and set up a new AirPort Extreme A1521 and want to take my old one and set it up at my son's.  It is an A1408.  Will doing a reset do all I need to do?  And then I want to hook up an Express to it to extend his network. Thanks.

  • Using shared objects to create bookmarks in a course module

    I'm relatively new to Flash ActionScripting so I'm looking for a little help from those who live and breathe the code. I've been banging my head around for weeks trying to get this function to work in my module (books, tutorials, internet etc...). My

  • How can I delete uploaded digital videos from iCloud?

    I have uploaded many digital videos that I have purchased and they have taken up almost all of my icloud space.  How can I delete them to free up space in icloud?  There's no more room to back up my devices.

  • EQ presets question

    iTunes 8 - is it possible to set equalizer presets so that specific presets only affect the song to which they are applied? I want certain EQ setting ONLY for certain songs, and the rest to default to NO SETTING . . . yet after a song plays with an E

  • List order of related items

    is there a way to control the order that the items are displayed in related items (e.g. opportunities under a contact...). Is the default sort by modification date?