SPNegoLoginModule and Fallback

Hi, I have a problem configuring SPNego on EP6 SP16.
SPNego works fine if configured alone, as the only LoginModule in the stack:
EvaluateTicketLoginModule       (Sufficient)
SPNegoLoginModule               (Requisite)
CreateTicketLoginModule         (Optional)
When I try to add an additional LoginModule to the stack as a fallback solution, the SPNegoLoginModule stops working.
Example:
EvaluateTicketLoginModule       (Sufficient)
SPNegoLoginModule               (Optional)
CreateTicketLoginModule         (Sufficient)
AnonymousLoginModule            (Requisite)
CreateTicketLoginModule         (Optional)
Each of the modules (SPNegoLoginModule/AnonymousLoginModule) works fine if placed alone in a login module stack, but when I put them together it looks like the SPNegoLoginModule stops reading the Authorization header from the HTTP request.
Has anyone managed to put the SPNegoLoginModule together with another LoginModule in one stack?
I have traced the HTTP request and everything looks exactly the same, but when I look into the portal log, SPNego cannot find any user information.
Any hint at all would be very helpful.
Regards
Edmund
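For reference, the flags in these stacks follow the standard JAAS control-flag semantics: a SUFFICIENT module that succeeds ends evaluation immediately, a REQUISITE module that fails aborts the whole login, and a failing OPTIONAL module does not by itself abort the login. The sketch below is illustration only - the module class names are placeholders, and on NetWeaver the stack is of course maintained in the Visual Administrator security policy configuration rather than in code:

import java.util.Collections;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.AppConfigurationEntry.LoginModuleControlFlag;
import javax.security.auth.login.Configuration;

// Plain-JAAS illustration of the fallback stack listed above.
public class SpnegoFallbackStackSketch extends Configuration {
    private static AppConfigurationEntry entry(String module, LoginModuleControlFlag flag) {
        return new AppConfigurationEntry(module, flag, Collections.<String, Object>emptyMap());
    }

    public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
        return new AppConfigurationEntry[] {
            entry("com.example.EvaluateTicketLoginModule", LoginModuleControlFlag.SUFFICIENT), // placeholder class names
            entry("com.example.SPNegoLoginModule",         LoginModuleControlFlag.OPTIONAL),
            entry("com.example.CreateTicketLoginModule",   LoginModuleControlFlag.SUFFICIENT),
            entry("com.example.AnonymousLoginModule",      LoginModuleControlFlag.REQUISITE),
            entry("com.example.CreateTicketLoginModule",   LoginModuleControlFlag.OPTIONAL),
        };
    }

    public void refresh() { }
}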

Hi Edmund,
Kyle is right.
If Kerberos authentication fails, it will not continue to the anonymous method.
The web browser will only show an error page.
You can uncheck "Enable Windows Integrated Authentication" in IE's Advanced options; then it will go to Anonymous right from the start.
In my case, I configured the first login with a client certificate, then Kerberos.
It looks like this:
EvaluateTicketLoginModule (Sufficient)
ClientCertLoginModule (Optional)
CreateTicketLoginModule (Sufficient)
SPNegoLoginModule (Optional)
CreateTicketLoginModule (Sufficient)
BasicPasswordLoginModule (Requisite)
CertPersisterLoginModule (Optional)
CreateTicketLoginModule (Optional)
I hope it can help you.
Maybe you can dedicate some computers to anonymous access, for visitors only.
Brad

Similar Messages

  • Load balancing, failover and fallback in Non-Clustered WebLogic environment

    hi,
    Has anyone implemented WebLogic 10.3.3 (or 10.3.4) in a non-clustered environment and also got load balancing, failover and fallback to work?
    We were successful in getting failover working using a t3://server1:7001,server2:7002 provider URL, but not load balancing or fallback.
    Failover works when the client is connected to server2: if we kill server2, it switches to server1. But fallback does not happen when server2 is still running and server1 comes back.
    All we need is a way to enforce fallback to the primary server, even if the secondary server the client connected to is still up and running when the primary comes back.
    Any help appreciated.
    Thanks.
    Best regards,
    Bala
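    For reference, a minimal sketch of the kind of client lookup being described - the initial-context factory is the standard WebLogic one, while the server addresses and the JNDI name are placeholders. With a comma-separated t3 address list the client simply falls back to the next address if one is unreachable when the context is created, which matches the behaviour described above (failover on connect, but no load balancing and no automatic fallback to the primary):

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class T3FailoverLookup {
        public static void main(String[] args) throws NamingException {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            // Comma-separated address list: the next address is tried if the first is unreachable.
            env.put(Context.PROVIDER_URL, "t3://server1:7001,server2:7002");
            Context ctx = new InitialContext(env);
            try {
                // "jms/MyConnectionFactory" is a placeholder JNDI name.
                Object cf = ctx.lookup("jms/MyConnectionFactory");
                System.out.println("Looked up: " + cf);
            } finally {
                ctx.close();
            }
        }
    }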


  • Preferred Distribution Point and Fallback

    If I have a protected (preferred) distribution point that doesn't have the content distributed to it yet, and fallback is enabled, will the client then move to another distribution point that does have the content, even though that one is protected by boundaries the client is not in?
    So the question is: can a client fall back to another protected distribution point that is referenced only by boundary groups that don't include any boundary the client is in?

    Basically yes. It's well explained here:
    http://blogs.technet.com/b/neilp/archive/2013/01/03/on-demand-content-distribution-fallback-distribution-points-a-2012-configuration-manager-micro-depp-dive.aspx
    Gerry Hampson | Blog: www.gerryhampsoncm.blogspot.ie | LinkedIn: Gerry Hampson | Twitter: @gerryhampson

  • Device Mapper and -fallback.img - stock, ck and beyond

    Hi!
    I have similar findings to superstoned, see at the end of
    http://bbs.archlinux.org/viewtopic.php?t=27696
    The device mapper module is not part of the kernel26*-fallback.img. I understand the fallback images to be the ones that contain all modules, so that if we get into trouble generating our own cpio image we can point GRUB/LILO at them and fix the problem.
    Unfortunately the device mapper is not included in them (I checked the stock, ck and beyond kernels), which can cause problems if somebody installs "/" on LVM-managed disks.
    Would it make sense to include it in the "fallback" image?
    Besides that, I created a custom cpio image that includes dm_mod by putting it into mkinitcpio.conf:
    MODULES="sata_nv sd_mod reiserfs dm_mod"
    The funny thing is that it works fine for the beyond and ck kernels, but the stock one is not loading it although it is there - in the img file. Weird.
    Any comments?
    waldek

    I checked again - the custom img file works for all 3 kernels. My mistake - sorry.

  • Problem with xfce and/or mouse and keyboard

    Hi!
    Two days ago I turned on my laptop with Arch and the display manager SLiM froze immediately. Later, in single-user mode, I uninstalled it. Now, in normal mode, when I type "startxfce4" everything shows up, but the mouse and keyboard don't work. I know it's not frozen, because the mouse animation and the CPU Frequency Monitor keep running.
    The strange thing is that the laptop is also not connecting to the Internet, so now I don't know where the problem is.
    It happens whether I start xfce as a normal user or as the super user.
    Just before these troubles I updated everything with "pacman -Syu", but I didn't see anything strange. The computer may also have been shut off abruptly (by unplugging the power supply) while it was turning on.
    Thanks,
    Gaba

    Sorry for not replying, but I didn't have access to my laptop until now.
    In GRUB I only have normal mode and 'fallback' mode, so I have the 'journalctl' output from normal mode.
    I'm sending the fragment of the file where you can see everything after I typed 'startxfce4' in the console:
    http://codepad.org/8pYIj8qC
    I can see that something is wrong, but after some attempts I don't know how to fix it ^^
    Edit:
    Oh, I forgot to write that I tried activating some drivers from 'lspci -k' (there was no "Kernel driver in use:..." line for almost every device), but it didn't fix anything.
    Last edited by gargoyle (2015-03-15 23:20:18)

  • [FIXED] Compiz runs only with emerald and crashes with metacity

    Hey,
    I'm currently trying to use a metacity theme with compiz enabled, but apparently that's something compiz doesn't really like.
    One thing I noticed in fusion-icon is that the metacity option under "Select Window Decorator" is simply not there (or was that always the case?).
    Okay, things I have done so far:
    - Searched this forum (obviously), found nothing with a solution
    - Searched the wiki (same result)
    - In gconf-editor set /desktop/gnome/session/required_components/windowmanager to compiz (as described on the Compiz wiki page on archlinux.org)
    - Disabled metacity as compositing manager (tried with it enabled as well, with the same result)
    - Right now I have the compiz wrapper script from Ubuntu (tweaked a little), and that starts up compiz just fine, but not with metacity
    And in case you want the script:
    #!/bin/sh
    # Compiz Manager wrapper script
    # Copyright (c) 2007 Kristian Lyngstøl <[email protected]>
    # This program is free software; you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation; either version 2 of the License, or
    # (at your option) any later version.
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    # GNU General Public License for more details.
    # You should have received a copy of the GNU General Public License
    # along with this program; if not, write to the Free Software
    # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
    # Contributions by: Treviño (3v1n0) <[email protected]>, Ubuntu Packages
    # Much of this code is based on Beryl code, also licensed under the GPL.
    # This script will detect what options we need to pass to compiz to get it
    # started, and start a default plugin and possibly window decorator.
    COMPIZ_BIN_PATH="/usr/bin/" # For window decorators and compiz
    PLUGIN_PATH="/usr/lib/compiz/"
    GLXINFO="/usr/bin/glxinfo"
    KWIN="/usr/bin/kwin"
    METACITY="/usr/bin/metacity"
    XFWM="/usr/bin/xfwm"
    COMPIZ_NAME="compiz" # Final name for compiz (compiz.real)
    # For Xgl LD_PRELOAD
    LIBGL_NVIDIA="/usr/lib/nvidia/libGL.so.1.2.xlibmesa"
    LIBGL_FGLRX="/usr/lib/fglrx/libGL.so.1.2.xlibmesa"
    # Minimum amount of memory (in kilo bytes) that nVidia cards need
    # to be allowed to start
    # Set to 262144 to require 256MB
    NVIDIA_MEMORY="65536" # 64MB
    NVIDIA_SETTINGS="nvidia-settings" # Assume it's in the path by default
    # For detecting what driver is in use, the + is for one or more /'s
    XORG_DRIVER_PATH="/usr/lib/xorg/modules/drivers/+"
    FALLBACKWM="xterm"
    if [ x"$KDE_FULL_SESSION" = x"true" ]; then
    FALLBACKWM="${KWIN}";
    elif [ x"$GNOME_DESKTOP_SESSION_ID" != x"" ]; then
    FALLBACKWM="${METACITY}"
    elif xprop -root _DT_SAVE_MODE | grep ' = \"xfce4\"$' >/dev/null 2>&1; then
    FALLBACKWM="${XFWM}"
    fi
    FALLBACKWM_OPTIONS="--replace $@"
    # Driver whitelist
    WHITELIST="nvidia intel ati radeon i810 fglrx"
    # blacklist based on the pci ids
    # See http://wiki.compiz-fusion.org/Hardware/Blacklist for details
    #T=" 1002:5954 1002:5854 1002:5955" # ati rs480
    #T="$T 1002:4153" # ATI Rv350
    #T="$T 8086:2982 8086:2992 8086:29a2 8086:2a02 8086:2a12" # intel 965
    T="$T 8086:2a02 " # Intel GM965
    T="$T 8086:3577 8086:2562 " # Intel 830MG, 845G (LP: #259385)
    BLACKLIST_PCIIDS="$T"
    unset T
    COMPIZ_OPTIONS="--ignore-desktop-hints --replace"
    COMPIZ_PLUGINS="core"
    ENV=""
    # Use emerald by default if it exist
    USE_EMERALD="yes"
    # No indirect by default
    INDIRECT="no"
    # Default X.org log if xset q doesn't reveal it
    XORG_DEFAULT_LOG="/var/log/Xorg.0.log"
    # Set to yes to enable verbose
    VERBOSE="yes"
    # Echoes the arguments if verbose
    verbose()
    {
    if [ "x$VERBOSE" = "xyes" ]; then
    printf "$*"
    fi
    }
    # abort script and run fallback windowmanager
    abort_with_fallback_wm()
    {
    if [ "x$SKIP_CHECKS" = "xyes" ]; then
    verbose "SKIP_CHECKS is yes, so continuing despite problems.\n"
    return 0;
    fi
    if [ "x$CM_DRY" = "xyes" ]; then
    verbose "Dry run failed: Problems detected with 3D support.'n"
    exit 1;
    fi
    verbose "aborting and using fallback: $FALLBACKWM \n"
    if [ -x $FALLBACKWM ]; then
    exec $FALLBACKWM $FALLBACKWM_OPTIONS
    else
    printf "no $FALLBACKWM found, exiting\n"
    exit 1
    fi
    }
    # Check if we run with the Software Rasterizer, this happens e.g.
    # when a second screen session is opened via f-u-a on intel
    check_software_rasterizer()
    {
    verbose "Checking for Software Rasterizer: "
    if glxinfo 2> /dev/null | egrep -q '^OpenGL renderer string: Software Rasterizer' ; then
    verbose "present. \n";
    return 0;
    else
    verbose "Not present. \n"
    return 1;
    fi
    }
    # Check for non power of two texture support
    check_npot_texture()
    {
    verbose "Checking for non power of two support: "
    if glxinfo 2> /dev/null | egrep -q '(GL_ARB_texture_non_power_of_two|GL_NV_texture_rectangle|GL_EXT_texture_rectangle|GL_ARB_texture_rectangle)' ; then
    verbose "present. \n";
    return 0;
    else
    verbose "Not present. \n"
    return 1;
    fi
    }
    # Check for presence of FBConfig
    check_fbconfig()
    {
    verbose "Checking for FBConfig: "
    if [ "$INDIRECT" = "yes" ]; then
    $GLXINFO -i | grep -q GLX.*fbconfig
    FB=$?
    else
    $GLXINFO | grep -q GLX.*fbconfig
    FB=$?
    fi
    if [ $FB = "0" ]; then
    unset FB
    verbose "present. \n"
    return 0;
    else
    unset FB
    verbose "not present. \n"
    return 1;
    fi
    }
    # Check for TFP
    check_tfp()
    {
    verbose "Checking for texture_from_pixmap: "
    if [ $($GLXINFO 2>/dev/null | grep -c GLX_EXT_texture_from_pixmap) -gt 2 ] ; then
    verbose "present. \n"
    return 0;
    else
    verbose "not present. \n"
    if [ "$INDIRECT" = "yes" ]; then
    unset LIBGL_ALWAYS_INDIRECT
    INDIRECT="no"
    return 1;
    else
    verbose "Trying again with indirect rendering:\n";
    INDIRECT="yes"
    export LIBGL_ALWAYS_INDIRECT=1
    check_tfp;
    return $?
    fi
    fi
    }
    # Check whether the composite extension is present
    check_composite()
    {
    verbose "Checking for Composite extension: "
    if xdpyinfo -queryExtensions | grep -q Composite ; then
    verbose "present. \n";
    return 0;
    else
    verbose "not present. \n";
    return 1;
    fi
    }
    # Detects if Xgl is running
    check_xgl()
    {
    verbose "Checking for Xgl: "
    if xvinfo | grep -q Xgl ; then
    verbose "present. \n"
    return 0;
    else
    verbose "not present. \n"
    return 1;
    fi
    }
    # Check if the nVidia card has enough video ram to make sense
    check_nvidia_memory()
    {
    if [ ! -x "$NVIDIA_SETTINGS" ]; then
    return 0
    fi
    MEM=$(${NVIDIA_SETTINGS} -q VideoRam | egrep Attribute\ \'VideoRam\'\ .*: | cut -d: -f3 | sed 's/[^0-9]//g')
    if [ $MEM -lt $NVIDIA_MEMORY ]; then
    verbose "Less than ${NVIDIA_MEMORY}kb of memory and nVidia";
    return 1;
    fi
    return 0;
    }
    # Check for existence of NV-GLX
    check_nvidia()
    {
    if [ ! -z $NVIDIA_INTERNAL_TEST ]; then
    return $NVIDIA_INTERNAL_TEST;
    fi
    verbose "Checking for nVidia: "
    if xdpyinfo | grep -q NV-GLX ; then
    verbose "present. \n"
    NVIDIA_INTERNAL_TEST=0
    return 0;
    else
    verbose "not present. \n"
    NVIDIA_INTERNAL_TEST=1
    return 1;
    fi
    }
    # Check if the max texture size is large enough compared to the resolution
    check_texture_size()
    {
    # Check how many screens we've got and iterate over them
    N=$(xdpyinfo | grep -i "number of screens" | sed 's/.*[^0-9]//g')
    for i in $(seq 1 $N); do
    verbose "Checking screen $i"
    TEXTURE_LIMIT=$(glxinfo -l | grep GL_MAX_TEXTURE_SIZE | sed -n "$i s/^.*=[^0-9]//g p")
    RESOLUTION=$(xdpyinfo | grep -i dimensions: | sed -n -e "$i s/^ *dimensions: *\([0-9]*x[0-9]*\) pixels.*/\1/ p")
    VRES=$(echo $RESOLUTION | sed 's/.*x//')
    HRES=$(echo $RESOLUTION | sed 's/x.*//')
    verbose "Comparing resolution ($RESOLUTION) to maximum 3D texture size ($TEXTURE_LIMIT): ";
    if [ $VRES -gt $TEXTURE_LIMIT ] || [ $HRES -gt $TEXTURE_LIMIT ]; then
    verbose "Failed.\n"
    return 1;
    fi
    verbose "Passed.\n"
    done
    return 0
    }
    # check driver whitelist
    running_under_whitelisted_driver()
    {
    LOG=$(xset q|grep "Log file"|awk '{print $3}')
    if [ "$LOG" = "" ]; then
    verbose "xset q doesn't reveal the location of the log file. Using fallback $XORG_DEFAULT_LOG \n"
    LOG=$XORG_DEFAULT_LOG;
    fi
    if [ -z "$LOG" ];then
    verbose "AIEEEEH, no Log file found \n"
    verbose "$(xset q) \n"
    return 0
    fi
    for DRV in ${WHITELIST}; do
    if egrep -q "Loading ${XORG_DRIVER_PATH}${DRV}_drv\.so" $LOG &&
    ! egrep -q "Unloading ${XORG_DRIVER_PATH}${DRV}_drv\.so" $LOG;
    then
    return 0
    fi
    done
    verbose "No whitelisted driver found\n"
    return 1
    }
    # check pciid blacklist
    have_blacklisted_pciid()
    {
    OUTPUT=$(lspci -n)
    for ID in ${BLACKLIST_PCIIDS}; do
    if echo "$OUTPUT" | egrep -q "$ID"; then
    verbose "Blacklisted PCIID '$ID' found \n"
    return 0
    fi
    done
    OUTPUT=$(lspci -vn | grep -i VGA)
    verbose "Detected PCI ID for VGA: $OUTPUT\n"
    return 1
    }
    build_env()
    {
    if check_nvidia; then
    ENV="__GL_YIELD=NOTHING "
    fi
    if [ "$INDIRECT" = "yes" ]; then
    ENV="$ENV LIBGL_ALWAYS_INDIRECT=1 "
    fi
    if check_xgl; then
    if [ -f ${LIBGL_NVIDIA} ]; then
    ENV="$ENV LD_PRELOAD=${LIBGL_NVIDIA}"
    verbose "Enabling Xgl with nVidia drivers...\n"
    fi
    if [ -f ${LIBGL_FGLRX} ]; then
    ENV="$ENV LD_PRELOAD=${LIBGL_FGLRX}"
    verbose "Enabling Xgl with fglrx ATi drivers...\n"
    fi
    fi
    ENV="$ENV FROM_WRAPPER=yes"
    if [ -n "$ENV" ]; then
    export $ENV
    fi
    }
    build_args()
    {
    if [ "x$INDIRECT" = "xyes" ]; then
    COMPIZ_OPTIONS="$COMPIZ_OPTIONS --indirect-rendering "
    fi
    if [ ! -z "$DESKTOP_AUTOSTART_ID" ]; then
    COMPIZ_OPTIONS="$COMPIZ_OPTIONS --sm-client-id $DESKTOP_AUTOSTART_ID"
    fi
    if check_nvidia; then
    if [ "x$INDIRECT" != "xyes" ]; then
    COMPIZ_OPTIONS="$COMPIZ_OPTIONS --loose-binding"
    fi
    fi
    }
    # Execution begins here.
    # Read configuration from XDG paths
    if [ -z "$XDG_CONFIG_DIRS" ]; then
    test -f /etc/xdg/compiz/compiz-manager && . /etc/xdg/compiz/compiz-manager
    for f in /etc/xdg/compiz/compiz-manager.d/*; do
    test -e $f && . $f
    done
    else
    OLD_IFS=$IFS
    IFS=:
    for CONFIG_DIR in $XDG_CONFIG_DIRS
    do
    test -f $CONFIG_DIR/compiz/compiz-manager && . $CONFIG_DIR/compiz/compiz-manager
    for f in $CONFIG_DIR/compiz/compiz-manager.d/*; do
    test -e $f && . $f
    done
    done
    IFS=$OLD_IFS
    unset OLD_IFS
    fi
    if [ -z "$XDG_CONFIG_HOME" ]; then
    test -f $HOME/.config/compiz/compiz-manager && . $HOME/.config/compiz/compiz-manager
    else
    test -f $XDG_CONFIG_HOME/compiz/compiz-manager && . $XDG_CONFIG_HOME/compiz/compiz-manager
    fi
    # Don't use compiz when running the failsafe session
    if [ "x$GNOME_DESKTOP_SESSION_ID" = "xFailsafe" ]; then
    abort_with_fallback_wm
    fi
    if [ "x$LIBGL_ALWAYS_INDIRECT" = "x1" ]; then
    INDIRECT="yes";
    fi
    # if we run under Xgl, we can skip some tests here
    if ! check_xgl; then
    # if vesa or vga are in use, do not even try glxinfo (LP#119341)
    if ! running_under_whitelisted_driver || have_blacklisted_pciid; then
    abort_with_fallback_wm
    fi
    # check if we have the required bits to run compiz and if not,
    # fallback
    if ! check_tfp || ! check_npot_texture || ! check_composite || ! check_texture_size; then
    abort_with_fallback_wm
    fi
    # check if we run with software rasterizer and if so, bail out
    if check_software_rasterizer; then
    verbose "Software rasterizer detected, aborting"
    abort_with_fallback_wm
    fi
    if check_nvidia && ! check_nvidia_memory; then
    abort_with_fallback_wm
    fi
    if ! check_fbconfig; then
    abort_with_fallback_wm
    fi
    fi
    # load the ccp plugin if present and fallback to plain gconf if not
    if [ -f ${PLUGIN_PATH}libccp.so ]; then
    COMPIZ_PLUGINS="$COMPIZ_PLUGINS ccp"
    elif [ -f ${PLUGIN_PATH}libgconf.so ]; then
    COMPIZ_PLUGINS="$COMPIZ_PLUGINS glib gconf"
    fi
    # enable gnomecompat if we run under gnome
    if [ x"$GNOME_DESKTOP_SESSION_ID" != x"" ] && [ ! -e ~/.compiz-gnomecompat ]; then
    verbose "running under gnome seesion, checking for gnomecompat\n"
    if ! gconftool -g /apps/compiz/general/allscreens/options/active_plugins|grep -q gnomecompat; then
    verbose "adding missing gnomecompat\n"
    V=$(gconftool -g /apps/compiz/general/allscreens/options/active_plugins|sed s/mousepoll,/mousepoll,gnomecompat,/)
    if ! echo $V|grep -q gnomecompat; then
    verbose "can not add gnomecompat, reseting\n"
    gconftool --unset /apps/compiz/general/allscreens/options/active_plugins
    else
    gconftool -s /apps/compiz/general/allscreens/options/active_plugins --type list --list-type=string $V
    fi
    touch ~/.compiz-gnomecompat
    fi
    fi
    # get environment
    build_env
    build_args
    if [ "x$CM_DRY" = "xyes" ]; then
    verbose "Dry run finished: everything should work with regards to Compiz and 3D.\n"
    verbose "Execute: ${COMPIZ_BIN_PATH}${COMPIZ_NAME} $COMPIZ_OPTIONS "$@" $COMPIZ_PLUGINS \n"
    exit 0;
    fi
    ${COMPIZ_BIN_PATH}${COMPIZ_NAME} $COMPIZ_OPTIONS "$@" $COMPIZ_PLUGINS || exec $FALLBACKWM $FALLBACKWM_OPTIONS
    That's it so far.
    So again, what I'm trying to do is run compiz with metacity decorations.
    I hope someone can help me here,
    Mark.
    Last edited by markg85 (2009-06-28 00:26:04)

    whoops wrote:
    Wait, what? I thought metacity isn't a window decorator, it is a window manager, right? I don't really get what you're trying to do... maybe install compiz-decorator-gtk and use it instead of emerald?
    edit: aaah, yes, I think you're missing compiz-decorator-gtk; try that and look at the fusion-icon "window decorator" option again.
    Strange... I was under the impression that pacman -S compiz-fusion would install the GTK and QT decorators... guess not.
    This issue is fixed now: running metacity decorations WITH compiz.

  • SPNegoLoginModule & Active Directory security group

    Hi,
    I have just implemented the SPNego(2)LoginModule with Active Directory for Web Dynpro.
    Users logged into Windows 7 desktops using IE can access the portal via Kerberos. All OK.
    But is it possible to restrict this access to only a specified AD group, e.g. "SpecialUsersAD"?
    I tried to find such an option in the J2EE Visual Admin -> Security Provider -> Ticket -> SPNegoLoginModule settings, but maybe there is another way to do it?
    Regards,
    Aleksander GG.

    By default there will only be one each of the Domain Configuration and Forest Configuration objects created for you when you install FIM. Have you created objects for the second domain?
    Bob Bradley (FIMBob @ TheFIMTeam.com) ... now using FIM Event Broker for just-in-time delivery of FIM 2010 policy via the sync engine, and continuous compliance for FIM

  • Why does Firefox crash when I open Adobe? Firefox & all plug-ins including Adobe are up to date and current. WHY?

    Firefox crashes when I open Adobe (PDF) files. ALL plug-ins are up to date, as is Firefox.

    This is a known issue with Adobe Reader, and it depends on Adobe updating their plugin.
    Whilst you're waiting for Adobe to fix their plugin, you can disable it and fall back to the default PDF reader in Firefox by following these steps:
    1. Go to Tools > Options (or Firefox > Options).
    2. In the Options window, select the Applications tab.
    3. In the Search field, type PDF. You should find Portable Document Format (PDF).
    4. On the right-hand side you should find an Action column. In order to view PDF files in Firefox, choose Preview in Firefox.
    Did this fix your problems? Please report back to us!
    Thank you.

  • Shockwave player and proxy settings

    Hi,
    all clients in our company must use a web proxy to connect to the internet.
    If anyone wants to watch a stream via rtmp://....swf, nothing happens.
    RTMP should work on port 1935 and fall back to 443, then 80.
    Problem: Shockwave ignores the proxy settings from the browser and the device. It tries to connect directly, without the proxy, which of course does not work.
    Are there any options to configure the proxy settings for Shockwave, or is this a bug?
    Thanks for any help!

    I think you're looking for a Flash Player forum

  • Minecraft and the current kernel.

    I've been using Minecraft on Arch for a while and I had no problems until the recent kernel update. Right now I'm using the fallback:
    [dbdii407@myhost ~]$ echo `uname -r`
    2.6.36-ARCH
    ... and I believe that's the fallback version.
    Whatever version is currently available through pacman has a problem with Minecraft. I'll connect to a server or play singleplayer, the cursor will come out of the game window, and my system will freeze entirely. I cannot move ANYTHING; it's like ice. So, a day later, I decided to run the fallback, since I was also getting random kernel panics at boot at times, and sure enough, everything is fine!
    If there are any commands you'd like me to run on my system, I'd be glad to provide the results! I'd like to get this resolved.

    Another kernel has been released as an update. This fixes the problem with starting the game and it freezing the system. But after playing for a while, my main monitor goes black and my secondary monitor freezes. I still have to force the computer to restart; there are no kernel errors.
    EDIT: This happens with both the normal and fallback kernels. And, from what I have seen, when you attempt to load your world after this problem occurs, Minecraft crashes. Once again, there is a kernel update, so I will see how this goes.
    Last edited by dbdii407 (2011-02-01 21:11:18)

  • Azure vs. AWS questions (CDN and other services)

    Hi all,
    We are currently migrating from AWS to Azure and we have a bunch of questions, mostly related to the CDN (in comparison to AWS), that we couldn't find answers to online:
    What is the right way to have a CDN that returns static content (Storage) and falls back to a virtual machine when the content is not available? Think of a service that automatically creates images when you ask it to, and saves them to a Storage bucket for later reuse. On AWS we could do this using CloudFront + 404 handling.
    Is there something similar enough to CloudFront, i.e. something that can reroute queries based on paths and/or different HTTP codes?
    When hitting a CDN that reroutes to a VM, we get "504 Gateway Timeout" on some requests; the same resources return fine if we ask for them a second time. What we are doing on that request is asking the Storage service for an image and returning a 302 redirect if it's not there. Perhaps the first request takes too long and that's why the CDN throws that error? A fresh example of this was this URL: http://az3.hinchas.co.uk/t/4/h/f/4hfaQHaZbRfvPSjsQVe4nH_768x768yz.jpg
    What's the recommended or fastest way to access public Storage from a VM? We are currently using the public DNS.
    There is still no way to have Node.js daemons that need some compilation, right? Versioning the compiled objects is not an option for us now, as we only develop on Linux and Mac and it would make our current build process much more complex.
    When using Azure's API service, can we customize API routes based on paths or parameters and proxy some requests to Amazon in order to ease the migration?
    Are there any recommended/official solutions to autoscale Cassandra or Elasticsearch clusters?
    Any help will be greatly appreciated.
    Thank you!

    Hi,
    Are you talking about the Azure CDN? If yes: when you enable CDN access for a storage account, the Windows Azure portal provides you with a domain name of the following format: http://<guid>.vo.msecnd.net/. This domain name can then be used to access blobs in a public container. When a request is made using the Windows Azure CDN URL, the request is redirected to the CDN endpoint closest to the location from which the request was made. If the blob is not found at that endpoint, it is retrieved from the Blob service and cached at the endpoint, where a time-to-live (TTL) setting is maintained for the cached blob. The TTL specifies how long the blob should be cached in the CDN before it is refreshed from the Blob service; the CDN attempts to refresh the blob from the Windows Azure Blob service only once the TTL has elapsed. For more detailed information about the Azure CDN, see: http://azure.microsoft.com/blog/2009/11/05/introducing-the-windows-azure-content-delivery-network/
    If you use the Azure CDN, you should not get the Gateway Timeout error. For the fastest way to access public Storage, you may consider the File sharing service: http://blogs.msdn.com/b/windowsazurestorage/archive/2014/05/12/introducing-microsoft-azure-file-service.aspx
    For the Azure API related issues, I suggest you ask in the Azure API Management forums: https://social.msdn.microsoft.com/Forums/en-US/home?forum=azureapimgmt
    For autoscaling, you could use the Azure Management API; for more detailed information you could ask in the Azure API forums as mentioned.
    Regards.
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • JSTL localizationContext and NitroX behavor

    I have a project that I have been writing for some time (about a year). It uses J2EE 1.3 (web.xml and other stuff like JSTL 1.0.6, Struts 1.2.4...).
    My question is about default resource bundles; in my particular case this happens:
    1. I have a web.xml file and part of it looks like this:
    <context-param>
    <param-name>
    javax.servlet.jsp.jstl.fmt.fallbackLocale</param-name>
    <param-value>en_US</param-value>
    </context-param>
    <context-param>
    <param-name>javax.servlet.jsp.jstl.fmt.basename</param-name>
    <param-value>ApplicationResources</param-value>
    </context-param>
    2. In the project classpath I have one file, ApplicationResources_en_US.properties (only one .properties file).
    ...and the plugin complains that it can't find ApplicationResources ("The resource bundle "ApplicationResources" cannot be resolved.").
    We all know that JSTL uses a fallback algorithm for finding the proper resource bundle (chapter 8.3 of the spec).
    The web.xml context params set:
    - javax.servlet.jsp.jstl.fmt.fallbackLocale (for the fallback when the algorithm fails);
    - javax.servlet.jsp.jstl.fmt.basename - for the basename... the locale will be set via the first step of the algorithm (section 8.3 of the JSTL 1.0 spec).
    So why do I get the warning "The resource bundle "ApplicationResources" cannot be resolved." even though I have fallbackLocale set, and why don't my JSP pages get the proper bundle (I only get key names like "error.NoAuth")? It is done the portable way - see chapter 8.3 of the JSTL spec.
    PS: the plugin works fine if I set localizationContext... in my case it is:
    <context-param>
    <param-name>
    javax.servlet.jsp.jstl.fmt.localizationContext
    </param-name>
    <param-value>ApplicationResources_en_US</param-value>
    </context-param>
    , but I would prefer for my app to specify only the basename and the fallback locale, as in the case described at the beginning of this post.
    ...waiting for an answer ;-)
    Student from Poland (Technical University of Gdansk)
    Luke.

    ...I must also mention the last step of this algorithm - when everything else fails, the bundle with only the basename is tried (javax.servlet.jsp.jstl.fmt.basename)... but this doesn't work either, even though a proper [basename].properties file exists in the classpath (the output dir of the Eclipse project).
    Best regards.
    Luke.
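    As a side note, the same defaults can also be set programmatically through the JSTL Config API instead of (or in addition to) the context parameters. A minimal sketch, assuming a servlet-context listener registered via a <listener> element in web.xml - the listener class name is made up; passing a plain String for the localization context makes JSTL treat it as a resource-bundle basename:

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import javax.servlet.jsp.jstl.core.Config;

    // Hypothetical listener that sets application-wide JSTL i18n defaults,
    // equivalent to the localizationContext / fallbackLocale context-params above.
    public class JstlI18nDefaultsListener implements ServletContextListener {
        public void contextInitialized(ServletContextEvent sce) {
            // A String value is interpreted as a resource-bundle basename.
            Config.set(sce.getServletContext(), Config.FMT_LOCALIZATION_CONTEXT, "ApplicationResources");
            Config.set(sce.getServletContext(), Config.FMT_FALLBACK_LOCALE, "en_US");
        }

        public void contextDestroyed(ServletContextEvent sce) {
            // nothing to clean up
        }
    }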

  • Shared DPs and Boundaries

    When you enable distribution point sharing during migration, it creates boundary groups and adds the CM07 DPs to them. I've found that this causes content location issues for CM12 clients. For example, I have a few pilot machines with CM12 clients. If I deploy an application or package to those systems, they look for the content on the shared DP and not on the CM12 DP. This happens even when I create another boundary group and add the CM12 DP as well as the client's boundary.
    Why does the client prioritize the shared DP over the CM12 DP?
    Hopefully my description makes sense!
    Thanks for any insight!
    Jeff

    Hi,
    Please refer to the link below:
    How Preferred Distribution Points and Fallback is working in Configuration Manager 2012
    https://sccmguru.wordpress.com/2012/05/08/how-preferred-distribution-points-and-fallback-is-working-in-configuration-manager-2012/
    Note: Microsoft provides third-party contact information to help you find technical support. This contact information may change without notice. Microsoft does not guarantee the accuracy of this third-party contact information.
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • Cisco ASA 5505 Dual-ISP Backup VPN

    I am trying to create a backup tunnel from an ASA 5505 to a PIX 501 in case the main ISP fails. The PIX external side will stay the same, but I am not quite sure how to create a new crypto map and have it use the backup ISP interface without bringing down the main tunnel.
    My first thought was to add the following crypto map entries to the configuration below:
    crypto map outside_map 2 match address outside_1_cryptomap
    crypto map outside_map 2 set peer 9.3.21.13
    crypto map outside_map 2 set transform-set ESP-DES-MD5
    crypto map outside_map interface backupisp -->but this would break the current tunnel.
    NYASA# sh run
    : Saved
    ASA Version 7.2(4)
    hostname NYASA
    domain-name girls.org
    enable password CHwdJ2WMUcjxIIm8 encrypted
    passwd 2KFQnbNIdI.2KYOU encrypted
    names
    interface Vlan1
    nameif inside
    security-level 100
    ip address 10.1.2.1 255.255.255.0
    interface Vlan2
    nameif outside
    security-level 0
    ip address 9.17.5.8 255.255.255.240
    interface Vlan3
    description Backup ISP
    nameif backupisp
    security-level 0
    ip address 6.27.9.5 255.255.255.0
    interface Ethernet0/0
    switchport access vlan 2
    interface Ethernet0/1
    interface Ethernet0/2
    switchport access vlan 3
    interface Ethernet0/3
    interface Ethernet0/4
    interface Ethernet0/5
    interface Ethernet0/6
    interface Ethernet0/7
    ftp mode passive
    dns server-group DefaultDNS
    access-list outside_access_in extended permit icmp any any echo-reply
    access-list outside_access_in extended permit icmp any any source-quench
    access-list outside_access_in extended permit icmp any any unreachable
    access-list outside_access_in extended permit icmp any any time-exceeded
    access-list outside_access_in extended permit icmp any any
    access-list inside_nat0_outbound extended permit ip 10.1.2.0 255.255.255.0 10.1.1.0 255.255.255.0
    access-list inside_nat0_outbound extended permit ip 10.1.2.0 255.255.255.0 10.1.100.0 255.255.255.0
    access-list outside_1_cryptomap extended permit ip 10.1.2.0 255.255.255.0 10.1.1.0 255.255.255.0
    access-list outside_1_cryptomap extended permit ip 10.1.2.0 255.255.255.0 10.1.100.0 255.255.255.0
    access-list 150 extended permit ip any host 10.1.2.27
    access-list 150 extended permit ip host 10.1.2.27 any
    pager lines 24
    logging asdm informational
    mtu inside 1500
    mtu outside 1500
    mtu backupisp 1500
    no failover
    icmp unreachable rate-limit 1 burst-size 1
    asdm image disk0:/asdm-524.bin
    no asdm history enable
    arp timeout 14400
    nat-control
    global (outside) 1 interface
    global (backupisp) 1 interface
    nat (inside) 0 access-list inside_nat0_outbound
    nat (inside) 1 0.0.0.0 0.0.0.0
    access-group outside_access_in in interface outside
    route outside 0.0.0.0 0.0.0.0 9.17.5.7 1 track 1
    route backupisp 0.0.0.0 0.0.0.0 6.27.9.1 254
    timeout xlate 3:00:00
    timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
    timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
    timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
    timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
    aaa authentication ssh console LOCAL
    http server enable
    http 10.1.2.0 255.255.255.0 inside
    no snmp-server location
    no snmp-server contact
    snmp-server enable traps snmp authentication linkup linkdown coldstart
    sla monitor 10
    type echo protocol ipIcmpEcho 4.2.2.2 interface outside
    num-packets 3
    timeout 1000
    frequency 3
    sla monitor schedule 10 life forever start-time now
    crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac
    crypto map outside_map 1 match address outside_1_cryptomap
    crypto map outside_map 1 set peer 9.3.21.13
    crypto map outside_map 1 set transform-set ESP-DES-MD5
    crypto map outside_map interface outside
    crypto isakmp identity address
    crypto isakmp enable outside
    crypto isakmp policy 10
    authentication pre-share
    encryption des
    hash md5
    group 2
    lifetime 86400
    crypto isakmp nat-traversal  20
    track 1 rtr 10 reachability
    telnet timeout 5
    ssh 0.0.0.0 0.0.0.0 outside
    ssh timeout 60
    console timeout 0
    management-access inside
    username ptiadmin password BtOLil2gR0VaUjfX encrypted privilege 15
    tunnel-group 9.4.21.13 type ipsec-l2l
    tunnel-group 9.4.21.13 ipsec-attributes
    pre-shared-key *
    prompt hostname context
    Cryptochecksum:22bb60b07c4c1805b89eb2376683f861
    : end
    NYASA#
    Thanks in advance.

    In that case it is the PIX that needs two peers (pointing to the ASA).
    The ASA will require the crypto map to be applied to the backup interface as well (as you mentioned):
    crypto map outside_map interface backupisp -->but this would break the current tunnel.
    The above command should not break the current tunnel (as long as the route to reach the other end goes out via the primary interface).
    Additionally, you need IP SLA configured on the ASA so that it uses the primary connection and falls back to the backup connection to build the tunnel (and uses the primary interface again when it recovers).
    Federico.

  • Error while executing MPI enabled program

    Hi All,
    I am new to this type of programming. Linux, and using the Intel compiler.
    I have compiled my application successfully, but it gives the error below while executing.
    I am working on Linux x86_64.
    Please help, thanks in advance.
    libibverbs: Fatal: couldn't read uverbs ABI version.
    open_hca: device mlx4_0 not found
    [0] DAPL provider is not found and fallback device is not enabled
    [unset]: aborting job:
    Fatal error in MPI_Init: Other MPI error, error stack:
    MPIR_Init_thread(283): Initialization failed
    MPIDD_Init(98).......: channel initialization failed
    MPIDI_CH3_Init(176)..: generic failure with errno = -1
    Looking for a solution.

    Hi,
    unfortunately this is a forum for errors related to Oracle RAC (Real Application Clusters) issues.
    As far as I can see, your issue is not related. I doubt anybody will be able to help you here.
    Ronny Egner
    My Blog: http://blog.ronnyegner-consulting.de

Maybe you are looking for

  • How can I go back to Snow Leopard from lion without loosing files, etc.?

    I recently downloaded Mac OS X Lion, but I can't stand it. I've heard you can't get a refund from the app store but I'm not sure how that works and I'd realy like my money back. Also, if I do downgrade, I'm not sure how to back up my files. I have mu

  • How can I get the value of the previous item

    Hi, I have a block with some records in it. I want to capture the value of the previous item of the cursor position for eg I have emp_no, and emp_name If I click emp_name it goes to different block ang brings up emp info. for that particular employee

  • Clips not displaying in timeline

    I've been having a nightmare of a time editing a project because my clips are randomly not displaying in my sequence timeline.  The project recognizes them as being there and will play them when I preview, but some clips aren't represented graphicall

  • HT5731 How can I stop and delete an incomplete download which I don't want

    I proceeded with download of electronic file from a movie DVD I purchased into itunes. Then realized how big the file was and don't want to download it. I can pause the download and even delete it within the download screen, but the 'available downlo

  • Vendors Dunning - Due Items

    Hi Experts, Can any suggest me how the vendor Invoice Open items and Credit memo items are selected for Dunning. I have the below scenario. There is a invoice and a credit memo posted to a venodr account which are overdue and the overal balance is De