ICF Connector Must Understand check failed

We are building an ICF connector and need to send the user credential and timestamp using WS-Security. We created a project in JDeveloper, and after deploying it and testing it in the OIM EM console, we got the error message: oracle.sysman.emSDK.webservices.wsdlapi.SoapTestException: Client received SOAP Fault from server : Must Understand check failed for headers.
The options we have tried are adding a WS-Security policy in composite.xml and adding a username and password to the binding properties; the protocol used is SOAP 1.2.
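For reference, a client-side OWSM policy attachment on the reference binding in composite.xml generally looks like the sketch below. This is only a sketch: the port, policy URI, and credential values are placeholders, and the policy attached on the client must match what the service side expects; a mismatch in the security headers is precisely what surfaces as a MustUnderstand fault:

```xml
<binding.ws port="#wsdl.endpoint(MyService/MyPort)">
  <!-- OWSM client policy that adds the WS-Security UsernameToken header -->
  <wsp:PolicyReference URI="oracle/wss_username_token_client_policy"
                       orawsp:category="security" orawsp:status="enabled"/>
  <!-- placeholder credentials for the UsernameToken -->
  <property name="oracle.webservices.auth.username" type="xs:string">user</property>
  <property name="oracle.webservices.auth.password" type="xs:string">password</property>
</binding.ws>
```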

Hi Delhi,
I saw note 1161907.1, but it applies to Child OU creation.
In our case the pObjDN is incomplete.

Similar Messages

  • ICF Connector Error: ObjectSerializer ClassNotFoundException

    I installed an ICF-based connector, DBUM-11.1.1.6.0, which worked perfectly fine.
    Then I built my custom connector with the ICF framework for a flat file, which was not allowing me to compile the adapter due to long type parameters in the adapter parameters. Later I came to know that the long variable was not getting listed because of a bug in OIM, and it got resolved after applying the patch BP02.
    But I am facing another problem after applying the BP02 patch (p14760806_111200_Generic) on OIM.
    I am able to compile the adapter and configure the connector after applying the patch mentioned above, but I am facing the below error when provisioning the account:
    java.lang.ClassNotFoundException: org.identityconnectors.framework.impl.serializer.ObjectSerializer
    The same exception is now appearing when I provision the account with the DBUM-11.1.1.6.0 connector in the database as well as with my custom ICF adapter.
    I checked that the following jars are in the classpath:
    icf-oim-intg.jar
    connector-framework-internal.jar
    connector-framework.jar
    I also checked the contents of connector-framework-internal.jar, which has the package org.identityconnectors.framework.impl.serializer, but the class org.identityconnectors.framework.impl.serializer.ObjectSerializer is not present in the package.
    The connector was working prior to my patch upgrade and is not working now.
    Any help on the above is appreciated.

    Hi Experts,
    Tried all APIs mentioned, but the problem persists with the same error.
    To give a brief overview of our configuration: our process form has 13 attributes, and in the lookup prov-attribute map we have set __UID__ to the user login of our process form. So when we try to update a field, for example firstname, the adapter is getting triggered, but we are getting the error as mentioned in the post.
    We are using the public java.lang.String oracle.iam.connectors.icfcommon.prov.ICProvisioningManager.updateAttributeValue(java.lang.String,java.lang.String) method in our update adapter.
    So I just wanted to understand: do we have to set the __NAME__ attribute also in the lookup and map it to some process form label?
    Has anyone tried doing a similar thing and solved an issue like the one we are experiencing? Your valuable inputs are highly appreciated.
    Thanks
    Edited by: 962322 on Nov 21, 2012 3:08 AM

  • Java ME 8 Permission check failed when opening a serial port

    I have a larger Java ME 8.1 application that was going well until I tried to add one last piece: reading and writing data from a serial port. This was left to last because it is trivial, at least in most programming languages. The IDE is NetBeans 8.0.2 running on a Windows 7 PC. The platform is a Raspberry Pi B or B+ (I have tried both) with the most current Raspbian (12/24/2014, I believe). To simplify the process I created a new app with just the open and close code, and this generates the same error I am experiencing in the larger application. The program is as follows:
    package javamecomapp;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import javax.microedition.io.CommConnection;
    import javax.microedition.io.Connector;
    import javax.microedition.midlet.MIDlet;
    /**
     * @author ****
     */
    public class JavaMEcomApp extends MIDlet {
        static int BAUD_RATE = 38400;
        static String SERIAL_DEVICE = "ttyAMA0";
        static CommConnection commConnection = null;
        static OutputStream os = null;
        static InputStream is = null;
        static String connectorString;
        private int rtnValue = -1;
        @Override
        public void startApp() {
            java.lang.System.out.println("Opening comm port.");
            try {
                rtnValue = JavaMEcomApp.openComm();
            } catch (IOException ex) {
                Logger.getLogger(JavaMEcomApp.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
        @Override
        public void destroyApp(boolean unconditional) {
            java.lang.System.out.println("Closing comm port.");
            try {
                rtnValue = JavaMEcomApp.closeComm();
            } catch (IOException ex) {
                Logger.getLogger(JavaMEcomApp.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
        private static int openComm() throws IOException {
            java.lang.System.out.println("Opening comm port.");
            connectorString = "comm:" + SERIAL_DEVICE + ";baudrate=" + BAUD_RATE;
            commConnection = (CommConnection) Connector.open(connectorString);
            is = commConnection.openInputStream();
            os = commConnection.openOutputStream();
            return 0;
        }
        private static int closeComm() throws IOException {
            java.lang.System.out.println("Closing comm port.");
            is.close();
            os.close();
            commConnection.close();
            return 0;
        }
    }
    If I comment out the JavaMEcomApp.openComm and closeComm lines it runs fine. When they are included, the following error is dumped to the Raspberry Pi terminal:
    Opening comm port.
    Opening comm port.
    [CRITICAL] [SECURITY] iso=2:Permission check failed: javax.microedition.io.CommProtocolPermission "comm:ttyAMA0;baudrate=38400" ""
    TRACE: <at java.security.AccessControlException: >, startApp threw an Exception
    java.security.AccessControlException:
    - com/oracle/meep/security/AccessControllerInternal.checkPermission(), bci=118
    - java/security/AccessController.checkPermission(), bci=1
    - com/sun/midp/io/j2me/comm/Protocol.checkForPermission(), bci=16
    - com/sun/midp/io/j2me/comm/Protocol.openPrim(), bci=31
    - javax/microedition/io/Connector.open(), bci=77
    - javax/microedition/io/Connector.open(), bci=6
    - javax/microedition/io/Connector.open(), bci=3
    - javamecomapp/JavaMEcomApp.openComm(), bci=46
    - javamecomapp/JavaMEcomApp.startApp(), bci=9
    - javax/microedition/midlet/MIDletTunnelImpl.callStartApp(), bci=1
    - com/sun/midp/midlet/MIDletPeer.startApp(), bci=5
    - com/sun/midp/midlet/MIDletStateHandler.startSuite(), bci=246
    - com/sun/midp/main/AbstractMIDletSuiteLoader.startSuite(), bci=38
    - com/sun/midp/main/CldcMIDletSuiteLoader.startSuite(), bci=5
    - com/sun/midp/main/AbstractMIDletSuiteLoader.runMIDletSuite(), bci=130
    - com/sun/midp/main/AppIsolateMIDletSuiteLoader.main(), bci=26
    java.security.AccessControlException:
    - com/oracle/meep/security/AccessControllerInternal.checkPermission(), bci=118
    - java/security/AccessController.checkPermission(), bci=1
    - com/sun/midp/io/j2me/comm/Protocol.checkForPermission(), bci=16
    - com/sun/midp/io/j2me/comm/Protocol.openPrim(), bci=31
    - javax/microedition/io/Connector.open(), bci=77
    - javax/microedition/io/Connector.open(), bci=6
    - javax/microedition/io/Connector.open(), bci=3
    - javamecomapp/JavaMEcomApp.openComm(), bci=46
    - javamecomapp/JavaMEcomApp.startApp(), bci=9
    - javax/microedition/midlet/MIDletTunnelImpl.callStartApp(), bci=1
    - com/sun/midp/midlet/MIDletPeer.startApp(), bci=5
    - com/sun/midp/midlet/MIDletStateHandler.startSuite(), bci=246
    - com/sun/midp/main/AbstractMIDletSuiteLoader.startSuite(), bci=38
    - com/sun/midp/main/CldcMIDletSuiteLoader.startSuite(), bci=5
    - com/sun/midp/main/AbstractMIDletSuiteLoader.runMIDletSuite(), bci=130
    - com/sun/midp/main/AppIsolateMIDletSuiteLoader.main(), bci=26
    Closing comm port.
    Closing comm port.
    TRACE: <at java.lang.NullPointerException>, destroyApp threw an Exception
    java.lang.NullPointerException
    - javamecomapp/JavaMEcomApp.closeComm(), bci=11
    - javamecomapp/JavaMEcomApp.destroyApp(), bci=9
    - javax/microedition/midlet/MIDletTunnelImpl.callDestroyApp(), bci=2
    - com/sun/midp/midlet/MIDletPeer.destroyApp(), bci=6
    - com/sun/midp/midlet/MIDletStateHandler.startSuite(), bci=376
    - com/sun/midp/main/AbstractMIDletSuiteLoader.startSuite(), bci=38
    - com/sun/midp/main/CldcMIDletSuiteLoader.startSuite(), bci=5
    - com/sun/midp/main/AbstractMIDletSuiteLoader.runMIDletSuite(), bci=130
    - com/sun/midp/main/AppIsolateMIDletSuiteLoader.main(), bci=26
    java.lang.NullPointerException
    - javamecomapp/JavaMEcomApp.closeComm(), bci=11
    - javamecomapp/JavaMEcomApp.destroyApp(), bci=9
    - javax/microedition/midlet/MIDletTunnelImpl.callDestroyApp(), bci=2
    - com/sun/midp/midlet/MIDletPeer.destroyApp(), bci=6
    - com/sun/midp/midlet/MIDletStateHandler.startSuite(), bci=376
    - com/sun/midp/main/AbstractMIDletSuiteLoader.startSuite(), bci=38
    - com/sun/midp/main/CldcMIDletSuiteLoader.startSuite(), bci=5
    - com/sun/midp/main/AbstractMIDletSuiteLoader.runMIDletSuite(), bci=130
    com/sun/midp/main/AppIsolateMIDletSuiteLoader.main(), bci=26
    I have tried this with three different serial ports: /dev/ttyAMA0 (yes, I did disable the OS from using it), an Arduino board /dev/ttyACM0, and a USB-to-RS485 adapter /dev/ttyUSB0. All of these ports could be connected to and used normally with both a C program and a terminal program on the Pi. The API permissions were set in the project properties / Application Descriptor / API Permissions to jdk.dio.DeviceMgmtPermission "/dev/ttyAMA0". This of course was changed as I tested different devices.
    I found a reference suggesting adding the line "authentication.provider = com.oracle.meep.security.NullAuthenticationProvider" to the end of the jwc_properties.ini file. This had no effect. I found references saying that during development in Eclipse and NetBeans, the app is already elevated to the top level, so this should not be an issue until deployment. This does not appear to be the case.
    I am out of time and need a solution quickly. Any suggestions are welcome.
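One detail stands out in the trace above: the failed check is for javax.microedition.io.CommProtocolPermission, while the descriptor was granted only jdk.dio.DeviceMgmtPermission. A sketch of the additional API Permissions entry the comm protocol appears to require (the attribute name and quoting follow the MEEP 8 descriptor convention and are an assumption here):

```
MIDlet-Permission-1: javax.microedition.io.CommProtocolPermission "comm:ttyAMA0;baudrate=38400"
```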

    Terrence,
       Thank you for responding and confirming the issues I'm having with static addressing. As for the example above, I do have the standard LEDs working correctly; however, the example I'm referring to is from the Java ME samples, using the GPIO port for the LEDs, according to the Device I/O Preconfigured List you referenced:
    GPIO Ports
    The following GPIO ports are preconfigured.
    Device ID: 8
    Device Name: LEDS
    Mapped: PTB22, PTE26, PTB21
    Configuration:
      direction = 1 (Output only)
      initValue = 0
      GPIO pins:
        controllerNumber = 1, pinNumber = 22, mode = 4 (Push-pull mode)
        controllerNumber = 4, pinNumber = 26, mode = 4 (Push-pull mode)
        controllerNumber = 1, pinNumber = 21, mode = 4 (Push-pull mode)
    So is the assumption correct that using GPIOPort to access the GPIO port for Device ID 8, as listed in the Device I/O Preconfigured List, is not supported?

  • Getting P2PP BURNBOOT check failed. Windows 7 not starting after recovery restoration completion

                                          PLEASE HELP
    Hi, I am trying to do a factory recovery of my HP DV6 laptop using the original HP recovery discs. My laptop has the Windows 7 (64-bit) OS. I am doing the steps below:
    By pressing F10 and entering BIOS mode, I changed the boot order to boot from the CD-ROM, saved the setting with F10, then forced a shutdown of the laptop and inserted the 1st recovery disc.
    I restarted the laptop and it started to boot; it asked for all 3 recovery discs plus the supplement driver disc. All were successfully copied and installed. The laptop was restarted multiple times, as displayed on screen. I could see software being installed and service and registry settings being applied, and after around 1 to 1.5 hours the laptop shut down, so I believe all installations were performed.
    Now when I start my laptop, it says:
    Windows was not shut down properly. Choose an option. When I select any of the 4 options, like Start normally, Start with command prompt, or Start with networking, every time it takes me to a screen giving the message below:
    SAVE LOGS        DETAILS      RETRY
    In the Details logs, I see the message below:
    P2PP BURNBOOT Check failed
    Possible causes:
    1. Yellow-Bang occured at device manager
    2. Some silent-install failure of applications
    3. Found failed at PININST_BBV
    4. Found failed at PININST_BBV2
    5. Found memory Dump file
    Suggestion:
    1. Checking REGDEV_BB.log for drivers
    2. Checking BBApps.log for applications
    3. Checking MEMDUMP_BBV.log for memory dump file
    1. After reading forum queries, I tried to set the BIOS defaults by pressing F9, then pressed F10 to save and exit, and then tried to reinstall everything, but I get the same error.
    2. I tried changing the system date from March 18th 2015 to March 18th 2012 and then doing the recovery, but I still get the same error.
    3. I pressed F2 and did all the memory and hard disk tests, and they all passed.
    So can anyone please help me with what the issue is? Thanks

    Hi there @VKD1 
    Welcome to the HP Support Forums! It is a great place to find the help you need, from other users, HP experts, and other support personnel.
    I understand that your notebook is not starting after a system recovery, with a "Windows was not shutdown properly" error message.
     See if you can run the startup repair:
    How to Run a Startup Repair in Windows 7 - sevenforums.com
    Malygris1
    I work on behalf of HP
    Please click Accept as Solution if you feel my post solved your issue, it will help others find the solution.
    Click Kudos Thumbs Up on the right to say “Thanks” for helping!

  • Oracle License Checking Failing!

    Hi,
    I am trying to get the JDBC driver for Oracle working but
    am getting the following error when I try to use the dbping
    utility provided:
    java.sql.SQLException: Fail to load jDriver/Oracle due to license checking failed!
    There is a copy of the WebLogicLicense.xml file in my WL_HOME directory.
    I'm sure I must be missing something really silly - please help!
    Thanks!
    Sukhy

    I have the same problem as the exception you specify. It seems like dbping cannot find the license although I include the path to the XML in my classpath. I am using WebLogic 6.1. Anyone have an idea?
    ray
    "Utpal" <[email protected]> wrote:
    Please include the path to the WebLogicLicense.xml in your classpath.
    -Utpal
    "Sukhy Gosal" <[email protected]> wrote in message news:[email protected]...
    Hi,
    I am trying to get the JDBC driver for Oracle working but am getting the following error when I try to use the dbping utility provided:
    java.sql.SQLException: Fail to load jDriver/Oracle due to license checking failed!
    There is a copy of the WebLogicLicense.xml file in my WL_HOME directory.
    I'm sure I must be missing something really silly - please help!
    Thanks!
    Sukhy

  • PRVF-4007 : User equivalence check failed for user "grid"

    Oracle version 11.2.0.3.0 patched to 11.2.0.3.1.
    I had installed GI and a RAC db on a 2-node cluster,
    but since yesterday I have had this issue when running the commands:
    [grid@vmorarac2 ~]$ cluvfy comp ocr -n all -verbose
    Verifying OCR integrity
    ERROR:
    PRVF-4008 : User equivalence unavailable on all the specified nodes
    Verification cannot proceed
    vmorarac1 and vmorarac2 are the two nodes.
    As the grid user, from vmorarac1 I ran ssh vmorarac2 and it failed with the above error, and vice versa.
    So I did the following from vmorarac1:
    ssh vmorarac2
    yes and the Enter key
    exec /usr/bin/ssh-agent $SHELL
    /usr/bin/ssh-add
    I did the same on the other node,
    but the problem still exists.
    Edited by: 912919 on 23-May-2012 06:58

    Hi,
    The subject of the thread is "PRVF-4007 : User equivalence check failed for user "grid"".
    Now..
    {code}
    PRVF-4657 : Name resolution setup check for "vmorarac-scan.pbi.global.pvt" (IP address: 152.144.199.201) failed
    PRVF-4664 : Found inconsistent name resolution entries for SCAN name "vmorarac-scan.pbi.global.pvt"
    PRVF-4664 : Found inconsistent name resolution entries for SCAN name "vmorarac-scan.pbi.global.pvt"
    {code}
    As you can see, these are different issues.
    If you are using DNS to resolve the hostname "vmorarac-scan.pbi.global.pvt", check with "nslookup" whether the name is resolved correctly.
    If you are using a hosts file to resolve the hostname "vmorarac-scan.pbi.global.pvt", you must configure only one IP (152.144.199.201) to resolve "vmorarac-scan.pbi.global.pvt", and this entry must be in the hosts file of all nodes of the cluster.
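Concretely, the hosts-file approach comes down to one line, built from the SCAN name and IP in the errors above (the short alias at the end is illustrative), replicated identically on every node:

```
152.144.199.201   vmorarac-scan.pbi.global.pvt   vmorarac-scan
```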
    Levi Pereira

  • Maintenance Optimizer - ABAP queue check failed

    Hi guys.
    When running Maintenance Optimizer for a SAP ERP 6.0 EHP4 which has SAP HR 604 installed, the following message is displayed
    ABAP queue check failed
    Error       The Installation/Upgrade Package for Add-on SAP_HR rel. 600 is not available.
    The goal is to update to EHP7 SP2.
    I am not sure why it is stating the SAP_HR rel. 600 package is not available when the system has the 604 release.
    During packages selection, Human Capital Management is checked and on the source system is listed SAP_HR 604 SP12
    The Solution Manager version is 7.1 SPS8, CR Content is updated to 9.9.
    I have also applied the SAP Note 1277035 recommendations, related to EHP4 missing, but still no luck.
    This seems similar to when SUM performs an EHP inclusion and cannot find the packages in the EPS directory, but this is happening in the Maintenance Optimizer.
    Is this a problem with the SAP backbone, or should I make some manual corrections on the system or in Solution Manager to make it aware that SAP_HR is at level 604 SP12?
    Thanks!

    Hello,
    The type of error you describe is almost always related to an issue in the SMSY/LMDB definition.
    It is likely that a wrong product instance was assigned to the system.
    In LMDB it is easy to check, even without verification checks (which may themselves be the root cause; a bad verification check happens sometimes). Go to the product system, open the node Technical Systems -> AS ABAP -> Software, and go to the product instance tab (in SP10, Product Instance - Details). When you select a given product instance, you see whether the software components that are part of it are installed or not (there is a frame in the lower part of the screen that shows the software components with the flag 'installed' ticked or not). Chances are, one or more of the instances have few or no software components installed.
    Mind you, you must keep at least one SAP ERP 6.0 product instance assigned; this would be the exception to the rule, but if you have an EHP4 for SAP ERP 6.0 system, it should be only one.
    Best regards,
    Miguel Ariño

  • Enable icfcommon logging for ICF Connector?

    I have a custom ICF connector and some of the logs are not showing up. In my code I used ODL loggers and those work fine, but the out-of-the-box icfcommon loggers are not working.
    Someone has used the same custom connector and was able to get the icfcommon logs to work. When I checked that environment, I did not see any icfcommon log handlers defined in the logging.xml.
    Here are some icfcommon logs I see in the oim_server1.out for that environment:
    Thread Id: 108
    Time: 2013-07-11 14:53:17.057
    Class: oracle.iam.connectors.icfcommon.service.oim9.OIM9Configuration
    Method: getLookupMap
    Level: OK
    Message: Enter: Lookup.DatabaseTable.UM.ReconAttrMap
    Thread Id: 108
    Time: 2013-07-11 14:53:17.066
    Class: oracle.iam.connectors.icfcommon.service.oim9.OIM9Configuration
    Method: getLookupMap
    Level: OK
    Message: Return
    Thread Id: 108
    Time: 2013-07-11 14:53:17.095
    Class: oracle.iam.connectors.icfcommon.recon.SearchReconTask
    Method: handle
    Level: INFO
    Message: Object with UID [534] ignored, contains no changes
    Thread Id: 108
    Time: 2013-07-11 14:53:17.096
    Class: oracle.iam.connectors.icfcommon.recon.SearchReconTask
    Method: handle
    Level: OK
    Message: Handling object with UID [535]
    Is there any configuration I need to make in order to see the icfcommon logs in my environment?

    I had a similar issue. In a clean OIM environment with no patches, I was getting logs. After patching OIM to Bundle Patch 3, the logs no longer appear.
    It is probably a bug.
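For comparison between environments: OIM logger levels are driven by logging.xml, and a definition for these messages would presumably key off the package visible in the log excerpt above. The logger name below is inferred from that package and the handler name is a placeholder, so treat this as a sketch rather than a documented setting:

```xml
<logger name="oracle.iam.connectors.icfcommon" level="TRACE:32" useParentHandlers="false">
  <handler name="odl-handler"/>
</logger>
```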

  • Errors when installing OS X 10.4 - Invalid sibling link / volume check failed

    I am trying to upgrade my PowerPC G4 from OS X 10.2.8 to 10.4. I bought the Tiger retail DVD. The install disc boots and runs, but near the end of installation I get "Error Installing Software - Please try installing again." I opened Disk Utility on the installer and tried First Aid, which failed:
    "Invalid Sibling Link"
    "Volume check failed"
    Error: The underlying task reported failure on exit
    HFS volume checked
    1 volume could not be repaired because of an error
    I tried First Aid repeatedly with no luck. I tried the fsck repair suggested in one of the discussion forums ("fsck -fy");
    again, "volume check failed".
    I'm looking to buy DiskWarrior as suggested, but am curious about another option I've read about. I found a macosxhints website http://www.macosxhints.com/article.php?story=20070204093925888
    that suggests running the install disk, clicking on terminal and unmounting the drive that the OS system "lives on". If I do this, will I lose the data already on my computer? The hint does not talk about how to remount the drive.
    I am NOT computer savvy. Would this be getting in over my head? It sounds simple enough, but I don't want to do any irreversible damage. Should I just buy DiskWarrior, or would the safest bet be to buy an external hard drive and back up everything, then perform an erase and install with Tiger?
    Any help would be MUCH appreciated!!!

    Hi confused- I just fixed my invalid sibling link error on my HD. Stop using your HD!
    It will only get radically worse. I went to http://www.alsoft.com/DiskWarrior/ to see which version of Disk Warrior I needed, which was 4.1. Then I went to a retail store and bought the CD for $99. On the box, it must say 4.0 with 4.1 included + CD rev 42. You must use the CD, as the download won't boot up your HD. In the box is a 1-pager of simple instructions. Put the disk in the CD drive, follow the instructions, click rebuild HD, wait 15 minutes. My HD is perfect now! It fixed permissions, rebuilt my directory, fixed my CS3 Adobe applications, fixed corrupted prefs, and found and restored ALL files. I lost nothing! Plus it made a detailed PDF report. Best $99 I ever spent. Good luck!

  • Online certificate check failed

    I downloaded Viber a while ago on my Nokia 5230 and it was working perfectly. Recently when I opened Viber on my phone, I received a message saying that there's a new version of Viber available on the Ovi Store that I should get, which I did. But when updating Viber, my phone says "online certificate check failed" and the installation stops there. What does that mean? Can someone please help? This is highly frustrating. I almost smashed my phone because of that. Please help.
    Solved!
    Go to Solution.

    Tasha0190 wrote:
    I received a message saying that there's a new version of Viber available on the Ovi Store that I should get, which I did.
    I guess, you used this item.
    Although scoobyman’s answer solves this issue, it opens up your Nokia to viruses and other bad applications. Signing makes sure the author of the app is the one he claims to be. Signing makes the author responsible for what he does. If an author does something bad, his certificates get revoked. OCSP makes sure the signature is still good. Therefore, revert these two settings after you have installed an app you trust.
    Furthermore, an application from the Nokia Store should work with any setting. Any error or warning message is not acceptable and should be forwarded to the Nokia Store team for further analysis.
    a) Menu » Settings » Installations » Installations settings » Software installation
    The state of this item does not matter because Viber is signed correctly. Therefore, ‘Signed only’ works for Viber and is recommend.
    b) Menu » Settings » Installations » Installations settings » Online certificate check (OCSP)
    The state of this item does matter. Therefore, please set it at least to ‘On’. In Wireshark, I checked that the certificate is not revoked but good. Therefore, I have no idea what is wrong here. This is not normal.
    Conclusion:
    Set ‘Online certificate check’ from ‘must be passed’ to ‘On’. If you still get the installation security warning ‘Unable to verify supplier’, report this to the Nokia Store team for further investigation.
    Change ‘Software installation’ to ‘Off’ only when you absolutely trust that app. Revert ‘Software installation’ to ‘Signed only’ after the installation of that single particular app.

  • [solved] Filesystem check fail - Cannot access LVM Logical Volumes

    I am getting a "File System Check Failed" on startup. I recently did a full system upgrade, but I'm not entirely sure that's the cause of the issue, as I don't reboot very often.
    I get the error right before this line is echoed out:
    /dev/mapper/Arch_LVM-Root:
    The super block could not be read or does not describe a correct ext2 filesystem...
    This is odd because the only ext2 filesystem I have is on a non-LVM boot partition...
    I can log in and mount / as read/write, and I can activate LVM with
    modprobe dm-mod
    and
    vgchange -ay Arch_LVM
    and they show up in lvdisplay but their status is "NOT available"
    I just need to mount these logical volumes so I can retrieve some personal data in my home directory, I am also hesitant to use LVM again if I can't retrieve my data.
    any suggestions?
    Last edited by action_owl (2010-08-15 02:15:58)

    I just popped in the install disk and was able to mount and access the LVM groups as expected, something must have been wonky with my filesystem

  • [SOLVED] Filesystem check failed on LVM partition...

    My server experienced a power outage last night, and I noticed today that it wasn't booting correctly. Apparently, the storage partition, /dev/mapper/VolGroup00-lvolstorage fails the filesystem check. The first time I booted into it, I ran a manual check and answered (y) to the questions. Now when booting, it quickly displays a bunch of numbers and that scrolls for a little while (goes too fast to understand what they are...). Then it says:
    ####Filesystem Check Failed####
    Please repair manually, blah blah blah
    I'm not really sure what to do. Running fsck does the whole numbers scrolling across the screen thing again, finally asking if I want to clone multiply-claimed blocks... =/ I don't want to answer yes anymore until I get someone's input. <_<
    EDIT: It said before that there are 81 inodes containing multiply-claimed blocks... Then it said a certain large file (inode #20) has 27818 multiply-claimed blocks shared with 24 other files, and it then lists other files.
    Last edited by XtrmGmr99 (2010-05-21 14:14:25)

    It went ahead and fixed the filesystem; however, some of the files (mostly music files) are corrupted and won't play. I have backups of those, so it's no big loss, as long as the file system works now.

  • Custom ICF Connector; Upd ChildTableValues (delete) not sending group name

    I'm building a custom ICF connector implemented as a .Net bundle. The UpdateChildTableValues operations for add and update on groups pass the group name into the code but the delete does not. There doesn't seem to be any indication of the operation itself for groups so it seems the developer must handle target group memberships in the code, which is fine. However, if a delete doesn't send the group name, how do I know what group to even look for? It seems this also adds the burden of API calls against OIM to identify the user's group assignments as well as a target search to compare and decide what was removed.
    OIM 11gR1 on RHEL 6.4 (64-bit)
    11.1.2 Connector Server on Win2K8R2 (64-bit)
    .Net 4 target build/C# 4.0

    You have 2 options when working with child tables:
    1) Use ICProvisioningManager#updateChildTableValues -- this provides the list of values after the update. So let's say the attribute has values group1, group2, group3 and you remove group1; then UpdateOp#update with attribute values group2, group3 will be called on your connector (so in this case your connector needs to implement UpdateOp). You are right, it might mean an additional target API call in your connector (depending on the target).
    2) Use ICProvisioningManager#addChildTableValue, ICProvisioningManager#updateChildTableValue, ICProvisioningManager#removeChildTableValue -- when you use these methods, the connector will get only the changes. In the previous example, UpdateAttributeValuesOp#removeAttributeValues with value group1 would be called on your connector. Your connector needs to implement UpdateAttributeValuesOp to make this work.
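With option 1 the connector receives only the post-update state, so the removed entries have to be derived by diffing against the current target state. A minimal, self-contained sketch of that diff in plain Java (class and method names are illustrative, not part of the ICF SPI):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ChildTableDiff {

    // Given the group list currently on the target and the full list OIM
    // passes to UpdateOp#update, return the groups that must be removed.
    public static Set<String> removedValues(List<String> onTarget, List<String> afterUpdate) {
        Set<String> removed = new HashSet<>(onTarget);
        removed.removeAll(afterUpdate);
        return removed;
    }

    public static void main(String[] args) {
        List<String> target = Arrays.asList("group1", "group2", "group3");
        List<String> update = Arrays.asList("group2", "group3");
        System.out.println(removedValues(target, update)); // prints [group1]
    }
}
```

The same diff in the opposite direction (afterUpdate minus onTarget) yields the memberships to add, so a single target search covers both cases.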
    Tomas

  • Shared storage check failed on nodes

    Hi friends,
    I am installing RAC 10g on VMware and the OS is OEL4. I completed all the prerequisites, but when I run the below command:
    ./runcluvfy stage -post hwos -n rac1,rac2, I am facing the below error.
    node connectivity check failed.
    Checking shared storage accessibility...
    WARNING:
    Unable to determine the sharedness of /dev/sde on nodes:
    rac2,rac2,rac2,rac2,rac2,rac1,rac1,rac1,rac1,rac1
    Shared storage check failed on nodes "rac2,rac1"
    Please help me, anyone; it's urgent.
    Thanks,
    poorna.
    Edited by: 958010 on 3 Oct, 2012 9:47 PM

    Hello,
    It seems that your storage is not accessible from both the nodes. If you want, you can follow these steps to configure 10g RAC on VMware.
    Steps to configure two-node 10g RAC on RHEL-4
    Remark-1: H/W requirement for RAC
    a) 4 Machines
    1. Node1
    2. Node2
    3. storage
    4. Grid Control
    b) 2 switches
    c) 6 straight cables
    Remark-2: S/W requirement for RAC
    a) 10g clusterware
    b) 10g database
    Both must be the same version, e.g. (10.2.0.1.0)
    Remark-3: RPMs requirement for RAC
    a) all 10g rpms (Better to use RHEL-4 and choose everything option to install all the rpms)
    b) 4 new rpms are required for installations
    1. compat-gcc-7.3-2.96.128.i386.rpm
    2. compat-gcc-c++-7.3-2.96.128.i386.rpm
    3. compat-libstdc++-7.3-2.96.128.i386.rpm
    4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
    ------------ Start Machine Preparation --------------------
    1. Prepare 3 machines
    i. node1.oracle.com
    eth0 (192.9.201.183) - for public network
    eth1 (10.0.0.1) - for private n/w
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    ii. node2.oracle.com
    eth0 (192.9.201.187) - for public network
    eth1 (10.0.0.2) - for private n/w
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    iii. openfiler.oracle.com
    eth0 (192.9.201.182) - for public network
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    NOTE:-
    -- Here eth0 of all the nodes should be connected by Public N/W using SWITCH-1
    -- eth1 of all the nodes should be connected by Private N/W using SWITCH-2
    2. network Configuration
    #vim /etc/hosts
    192.9.201.183 node1.oracle.com node1
    192.9.201.187 node2.oracle.com node2
    192.9.201.182 openfiler.oracle.com openfiler
    10.0.0.1 node1-priv.oracle.com node1-priv
    10.0.0.2 node2-priv.oracle.com node2-priv
    192.9.201.184 node1-vip.oracle.com node1-vip
    192.9.201.188 node2-vip.oracle.com node2-vip
    2. Prepare Both the nodes for installation
    a. Set Kernel Parameters (/etc/sysctl.conf)
    kernel.shmall = 2097152
    kernel.shmmax = 2147483648
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max = 65536
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 262144
    net.core.rmem_max = 262144
    net.core.wmem_default = 262144
    net.core.wmem_max = 262144
    b. Configure /etc/security/limits.conf file
    oracle soft nproc 2047
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536
    c. Configure /etc/pam.d/login file
    session required /lib/security/pam_limits.so
    d. Create user and groups on both nodes
    # groupadd oinstall
    # groupadd dba
    # groupadd oper
    # useradd -g oinstall -G dba oracle
    # passwd oracle
    e. Create required directories and set the ownership and permission.
    # mkdir -p /u01/crs1020
    # mkdir -p /u01/app/oracle/product/10.2.0/asm
    # mkdir -p /u01/app/oracle/product/10.2.0/db_1
    # chown -R oracle:oinstall /u01/
    # chmod -R 755 /u01/
    f. Set the environment variables
    $ vi .bash_profile
    ORACLE_BASE=/u01/app/oracle/; export ORACLE_BASE
    ORA_CRS_HOME=/u01/crs1020; export ORA_CRS_HOME
    #LD_ASSUME_KERNEL=2.4.19; export LD_ASSUME_KERNEL
    #LANG="en_US"; export LANG
    3. storage configuration
    PART-A Open-filer Set-up
    Install openfiler on a machine (Leave 60GB free space on the hdd)
    a) Login to root user
    b) Start iSCSI target service
    # service iscsi-target start
    # chkconfig --level 345 iscsi-target on
    PART –B Configuring Storage on openfiler
    a) From any client machine, open the browser and access the openfiler console (port 446).
    https://192.9.201.182:446/
    b) Open system tab and update the local N/W configuration for both nodes with netmask (255.255.255.255).
    c) From the Volume tab click "create a new physical volume group".
    d) From "Block Device management" click on the "(/dev/sda)" option under 'edit disk'.
    e) Under "Create a partition in /dev/sda" section create physical Volume with full size and then click on 'CREATE'.
    f) Then go to the "Volume Section" on the right hand side tab and then click on "Volume groups"
    g) Then under the "Create a new Volume Group" specify the name of the volume group (ex- racvgrp) and click on the check box and then click on "Add Volume Group".
    h) Then go to the "Volume Section" on the right hand side tab and then click on "Add Volumes" and then specify the Volume name (ex- racvol1) and use all space and specify the "Filesytem/Volume type" as ISCSI and then click on CREATE.
    i) Then go to the "Volume Section" on the right hand side tab and then click on "iSCSI Targets" and then click on ADD button to add your Target IQN.
    j) Then go to the "LUN Mapping" section and click on "MAP".
    k) Then go to the "Network ACL" section, allow both nodes from there, and click on UPDATE.
    Note:- To create multiple volumes with openfiler we would need to use multipathing, which is quite complex; that's why we are going for a single volume here. Edit the property of each volume and change access to allow.
    f) install the iscsi-initiator rpm on both nodes to access the iscsi disk
    #rpm -ivh iscsi-initiator-utils-----------
    g) Make entry in iscsi.conf file about openfiler on both nodes.
    #vim /etc/iscsi.conf (in RHEL-4)
    and in this file you will find the line "#DiscoveryAddress=192.168.1.2"; remove the comment and specify your storage IP address here.
    OR
    #vim /etc/iscsi/iscsi.conf (in RHEL-5)
    and in this file you will find the line "#ins.address = 192.168.1.2"; remove the comment and specify your storage IP address here.
    g) #service iscsi restart (on both nodes)
    h) From both Nodes fire this command to access volume of openfiler-
    # iscsiadm -m discovery -t sendtargets -p 192.9.201.182
    i) #service iscsi restart (on both nodes)
    j) #chkconfig --level 345 iscsi on (on both nodes)
    k) make 3 primary partitions and 1 extended, and within the extended partition make 11 logical partitions
    A. Prepare partitions
    1. #fdisk /dev/sdb
    :e (extended)
    Part No. 1
    First Cylinder:
    Last Cylinder:
    :p
    :n
    :l
    First Cylinder:
    Last Cylinder: +1024M
    2. Note the /dev/sdb* names.
    3. #partprobe
    4. Login as root user on node2 and run partprobe
    B. On node1 login as root user and create following raw devices
    # raw /dev/raw/raw5 /dev/sdb5
    # raw /dev/raw/raw6 /dev/sdb6
    # raw /dev/raw/raw12 /dev/sdb12
    Run ls -l /dev/sdb* and ls -l /dev/raw/raw* to confirm the above
    -Repeat the same thing on node2
    C. On node1 as root user
    # vi /etc/sysconfig/rawdevices
    /dev/raw/raw5 /dev/sdb5
    /dev/raw/raw6 /dev/sdb6
    /dev/raw/raw7 /dev/sdb7
    /dev/raw/raw8 /dev/sdb8
    /dev/raw/raw9 /dev/sdb9
    /dev/raw/raw10 /dev/sdb10
    /dev/raw/raw11 /dev/sdb11
    /dev/raw/raw12 /dev/sdb12
    /dev/raw/raw13 /dev/sdb13
    /dev/raw/raw14 /dev/sdb14
    /dev/raw/raw15 /dev/sdb15
    D. Restart the raw service (# service rawdevices restart)
    #service rawdevices restart
    Assigning devices:
    /dev/raw/raw5 --> /dev/sdb5
    /dev/raw/raw5: bound to major 8, minor 21
    /dev/raw/raw6 --> /dev/sdb6
    /dev/raw/raw6: bound to major 8, minor 22
    /dev/raw/raw7 --> /dev/sdb7
    /dev/raw/raw7: bound to major 8, minor 23
    /dev/raw/raw8 --> /dev/sdb8
    /dev/raw/raw8: bound to major 8, minor 24
    /dev/raw/raw9 --> /dev/sdb9
    /dev/raw/raw9: bound to major 8, minor 25
    /dev/raw/raw10 --> /dev/sdb10
    /dev/raw/raw10: bound to major 8, minor 26
    /dev/raw/raw11 --> /dev/sdb11
    /dev/raw/raw11: bound to major 8, minor 27
    /dev/raw/raw12 --> /dev/sdb12
    /dev/raw/raw12: bound to major 8, minor 28
    /dev/raw/raw13 --> /dev/sdb13
    /dev/raw/raw13: bound to major 8, minor 29
    /dev/raw/raw14 --> /dev/sdb14
    /dev/raw/raw14: bound to major 8, minor 30
    /dev/raw/raw15 --> /dev/sdb15
    /dev/raw/raw15: bound to major 8, minor 31
    done
    E. Repeat the same thing on node2 also
    F. To make these partitions accessible to oracle user fire these commands from both Nodes.
    # chown -R oracle:oinstall /dev/raw/raw*
    # chmod -R 755 /dev/raw/raw*
    F. To make these partitions accessible after restart make these entry on both nodes
    # vi /etc/rc.local
    chown -R oracle:oinstall /dev/raw/raw*
    chmod -R 755 /dev/raw/raw*
    4. SSH configuration (user equivalence)
    On node1:- $ssh-keygen -t rsa
    $ssh-keygen -t dsa
    On node2:- $ssh-keygen -t rsa
    $ssh-keygen -t dsa
    On node1:- $cd .ssh
    $cat *.pub>>node1
    On node2:- $cd .ssh
    $cat *.pub>>node2
    On node1:- $scp node1 node2:/home/oracle/.ssh
    On node2:- $scp node2 node1:/home/oracle/.ssh
    On node1:- $cat node*>>authorized_keys
    On node2:- $cat node*>>authorized_keys
    Now test the ssh configuration from both nodes
    $ vim a.sh
    ssh node1 hostname
    ssh node2 hostname
    ssh node1-priv hostname
    ssh node2-priv hostname
    $ chmod +x a.sh
    $./a.sh
    The first time you'll have to give the password; after that it never asks for a password.
    5. To run cluster verifier
    On node1 :-$cd /…/stage…/cluster…/cluvfy
    $./runcluvfy stage -pre crsinst -n node1,node2
    The first time, it will ask for four new RPMs; because of dependencies, it is better to install them by double clicking, in this order (rpm-3, rpm-4, rpm-1, rpm-2):
    1. compat-gcc-7.3-2.96.128.i386.rpm
    2. compat-gcc-c++-7.3-2.96.128.i386.rpm
    3. compat-libstdc++-7.3-2.96.128.i386.rpm
    4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
    Then run cluvfy again and check that it comes back clean, then start the clusterware installation.

  • Custom ICF Connector for Salesforce

    Gurus,
    I am new to the concept of ICF connector.
    I got a new requirement to develop a custom ICF connector for salesforce.
    Are there any samples for Salesforce you can direct me to? There is a lot on Google for flat files, but I don't understand how to use that for Salesforce. I am looking for Salesforce-specific material as I am not sure how to implement it.
    Please help.

    I believe Salesforce exposes Create User/Update User etc. web services. Confirm that first, and try to use the OOTB Webservice connector to integrate with Salesforce.
    ~J
