Good way to map SCN to LCR in Capture process?

How does one determine which SCN an LCR comes from? I found a number of undocumented yet potentially relevant views (v$LOGMNR_TRANSACTION, x$lcr, etc.), but, being undocumented, I have to make a number of guesses. grin
We are attempting to disable the propagation/apply process once a specific SCN has been applied. If you have another suggestion for how to accomplish this, please let me know. I have been trying to make my way through the various online documents, but as you are probably well aware, there is a lot of information to digest.
TIA

Thanks. The tip about dbms_apply_adm is a good one, and I am looking into it further. I'm having trouble with dba_apply_progress, though; or, more accurately, having trouble finding a captured SCN that I trust. dba_capture on the downstream database (not the target) shows me a lower number for captured_scn than for last_enqueued_scn. Based on what I see in the target, I am leaning towards trusting last_enqueued_scn and dba_apply_progress.applied_message_number.
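For what it's worth, a minimal sketch of the stop-at-a-given-SCN approach, using the documented maximum_scn apply parameter; the apply process name and the SCN value below are placeholders for your environment:

-- Disable the apply process before it applies any transaction whose
-- commit SCN is at or beyond the target ('MY_APPLY' and the SCN are hypothetical):
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'MY_APPLY',
    parameter  => 'maximum_scn',
    value      => '6553600');
END;
/

-- Track how far apply has progressed (highest applied commit SCN):
SELECT apply_name, applied_message_number
  FROM dba_apply_progress;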

Similar Messages

  • Resetting SCN from removed Capture Process

    I've come across a problem in Oracle Streams where the capture processes seem to get stuck. There are no reported errors in the alert log and no trace files, but the capture process fails to continue capturing changes. It stays enabled, but in an awkward state where the OEM Console reports zeros across the board (0 messages, 0 enqueued), when in fact there had been accurate totals in the past.
    Restarting the capture process does no good. The capture process seems to switch its state back and forth between Dictionary Initialization and Initializing. The only thing that seems to kickstart Streams again is to remove the capture process and recreate the same process.
    However, my problem is that I want to set the start_scn of the new capture process to the captured_scn of the removed capture process, so that the new one can start from where the old one left off. But I'm getting an error that this cannot be performed (cannot capture from the specified SCN).
    Am I understanding this correctly? Or should the new capture process automatically start from where the removed one left off?
    Thanks

    Hi,
    I seem to have the same problem.
    I now have a latency of around 3 days while nothing happened in the database, so I want to be able to set the capture process to a later SCN. Setting the start_scn gives me an error (I can't remember it now, unfortunately). Sometimes the capture process seems to get stuck in an archived log. It then takes a long time to move on, and when it does it sprints through a bunch of logs before getting stuck again. During that time all the statuses look good and no heavy CPU usage is observed. We saw that the capture builder has the highest CPU load, where I would expect the capture reader to be busy.
    I am able to set the first_scn, so a rebuild of the logminer dictionary might help a bit. But then again: why would the capture process need such a long time to process the archived logs where no relevant events are expected?
    In my case the Streams solution is being considered as a candidate for a replication solution where Quest's SharePlex is considered too expensive and unable to meet the requirements. One main reason it is considered inadequate is that it is not able to catch up after a database restart of a heavy batch. Now it seems that our capture process might suffer from the same problem. I sincerely hope I'm wrong and it proves to be capable.
    Regards,
    Martien
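    For reference, the "cannot capture from specified SCN" error described above is usually the documented constraint that start_scn must lie at or above the capture process's first_scn (the SCN of an existing LogMiner dictionary build). A minimal sketch, with 'MY_CAPTURE' and the SCN as placeholders:
    -- Compare the SCNs; start_scn cannot be set below first_scn:
    SELECT capture_name, first_scn, start_scn, captured_scn
      FROM dba_capture;
    -- Move the capture start forward once a suitable SCN is known:
    BEGIN
      DBMS_CAPTURE_ADM.ALTER_CAPTURE(
        capture_name => 'MY_CAPTURE',
        start_scn    => 6553600);
    END;
    /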

  • Capture process issue...archive log missing!!!!!

    Hi,
    The Oracle Streams capture process is alternating between the INITIALIZING and DICTIONARY INITIALIZATION states and does not proceed past them to capture updates made to the table.
    We have accidentally lost some archived logs and have no backups of them.
    Now I am going to recreate the capture process.
    How can I start the capture process from a new SCN?
    And what is the better way to remove archived log files from the central server, given that they may still be needed by the capture process's required SCN?
    Thanks,
    Faziarain
    Edited by: [email protected] on Aug 12, 2009 12:27 AM

    When using dbms_streams_adm to add a capture process, also perform a dbms_capture_adm.build. You will then see 'YES' in the dictionary_begin column of v$archived_log, which means that the first_change# of that archived log is the first SCN suitable for starting capture.
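    For instance, a quick sketch that lists the candidate starting points using that column:
    SELECT name, first_change#, next_change#
      FROM v$archived_log
     WHERE dictionary_begin = 'YES'
     ORDER BY first_change#;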
    RMAN is the preferred way in 10g+ to remove the archives, as it is aware of Streams constraints. If you can't use RMAN to purge the archives, then you need to check the minimum required SCN on your system by script and act accordingly.
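    A minimal sketch of that check, reading the lowest required checkpoint SCN from dba_capture (archives whose next_change# falls below it should be safe to purge):
    SELECT MIN(required_checkpoint_scn) FROM dba_capture;
    SELECT name
      FROM v$archived_log
     WHERE next_change# < (SELECT MIN(required_checkpoint_scn) FROM dba_capture);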
    Since 10g I recommend using RMAN but, nevertheless, here is the script I wrote back in the 9i days, when RMAN would eat the archives still needed by Streams with great appetite.
    #!/usr/bin/ksh
    # program : watch_arc.sh
    # purpose : check your archive directory and if the actual percentage used is > MAX_PERC
    #           then undertake the action requested (-d, -l or -m)
    # Author : Bernard Polarski
    # Date   :  01-08-2000
    #           12-09-2005      : added option -s MAX_SIZE
    #           20-11-2005      : added option -f to check if an archive is applied on data guard site before deleting it
    #           20-12-2005      : added option -z to check if an archive is still needed by logminer in a streams operation
    # set -xv
    #--------------------------- default values if not defined --------------
    # put here default values if you don't want to code them at run time
    MAX_PERC=85
    ARC_DIR=
    ACTION=
    LOG=/tmp/watch_arch.log
    EXT_ARC=
    PART=2
    #------------------------- Function section -----------------------------
    get_perc_occup() {
      cd $ARC_DIR
      if [ $MAX_SIZE -gt 0 ];then
           # size is given in mb, we calculate all in K
           TOTAL_DISK=`expr $MAX_SIZE \* 1024`
           USED=`du -ks . | tail -1| awk '{print $1}'`    # in Kb!
      else
        USED=`df -k . | tail -1| awk '{print $3}'`    # in Kb!
        if [ `uname -a | awk '{print $1}'` = HP-UX ] ;then
               TOTAL_DISK=`df -b . | cut -f2 -d: | awk '{print $1}'`
        elif [ `uname -s` = AIX ] ;then
               TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
        elif [ `uname -s` = ReliantUNIX-N ] ;then
               TOTAL_DISK=`df -k . | tail -1| awk '{print $2}'`
        else
                 # works on Sun
                 TOTAL_DISK=`df -b . | sed  '/avail/d' | awk '{print $2}'`
        fi
      fi
      USED100=`expr $USED \* 100`
      USG_PERC=`expr $USED100 / $TOTAL_DISK`
      echo $USG_PERC
    }
    #------------------------ Main process ------------------------------------------
    usage() {
        cat <<EOF
                  Usage : watch_arc.sh -h
                          watch_arc.sh  -p <MAX_PERC> -e <EXTENTION> -l -d -m <TARGET_DIR> -r <PART>
                                        -t <ARCHIVE_DIR> -c <gzip|compress> -v <LOGFILE>
                                        -s <MAX_SIZE (meg)> -i <SID> -g -f
                  Note :
                        -c compress file after move using either compress or gzip (if available)
                           if -c is given without -m then the file will be compressed in ARCHIVE_DIR
                        -d Delete selected files
                        -e Extension of files to be processed
                        -f Check if log has been applied, requires -i <sid> and -g if v8
                        -g Version 8 (use svrmgrl instead of sqlplus)
                        -i Oracle SID
                        -l List files that would be processed by -d or -m
                        -h help
                        -m move file to TARGET_DIR
                        -p Max percentage above which action is triggered.
                           Actions are of type -l, -d or -m
                        -t ARCHIVE_DIR
                        -s Perform action if size of target dir is bigger than MAX_SIZE (meg)
                        -v report action performed in LOGFILE
                        -r Part of files that will be affected by action :
                            2=half, 3=a third, 4=a quarter ... [ default=2 ]
                        -z Check if log is still needed by logminer (used in streams),
                                 requires -i <sid> and also -g for Oracle 8i
              This program lists, deletes or moves half of all files whose extension is given [ or default 'arc' ]
              It checks the size of the archive directory and, if the percentage occupancy is above the given limit,
              it performs the action on the older half of the files.
        How to use this prg :
                run this file from the crontab, say, each hour.
     example
     1) Delete archives sharing a common arch disk : when you are at 85% of 2500 meg, delete half of the files
     whose extension is 'arc', using the default affected part (default is -r 2)
     0,30 * * * * /usr/local/bin/watch_arc.sh -e arc -t /arc/POLDEV -s 2500 -p 85 -d -v /var/tmp/watch_arc.POLDEV.log
     2) Delete archives sharing a common disk with other DBs in /archive; act when at 90% of 140G, affect by deleting
     a quarter of all files (-r 4) whose extension is 'dbf', but connect first as sysdba in the POLDEV db (-i) to check they are
     applied (-f is a dataguard option)
     watch_arc.sh -e dbf -t /archive/standby/CITSPRD -s 140000 -p 90 -d -f -i POLDEV -r 4 -v /tmp/watch_arc.POLDEV.log
     3) Delete archives of DB POLDEV when it reaches 75%; affect a third of the files, but connect to the DB to check that
     logminer does not still need each archive (-z). This is useful in 9iR2 when using rman, as rman does not support DELETE INPUT
     in connection with logminer.
     watch_arc.sh -e arc -t /archive/standby/CITSPRD  -p 75 -d -z -i POLDEV -r 3 -v /tmp/watch_arc.POLDEV.log
    EOF
    }
    #------------------------- Function section -----------------------------
    if [ "x-$1" = "x-" ];then
          usage
          exit
    fi
    MAX_SIZE=-1  # disable this feature if it is not specifically selected
    while getopts  c:e:p:m:r:s:i:t:v:dhlfgz ARG
      do
        case $ARG in
           e ) EXT_ARC=$OPTARG ;;
           f ) CHECK_APPLIED=YES ;;
           g ) VERSION8=TRUE;;
           i ) ORACLE_SID=$OPTARG;;
           h ) usage
               exit ;;
           c ) COMPRESS_PRG=$OPTARG ;;
           p ) MAX_PERC=$OPTARG ;;
           d ) ACTION=delete ;;
           l ) ACTION=list ;;
           m ) ACTION=move
               TARGET_DIR=$OPTARG
               if [ ! -d $TARGET_DIR ] ;then
                   echo "Dir $TARGET_DIR does not exits"
                   exit
               fi;;
           r)  PART=$OPTARG ;;
           s)  MAX_SIZE=$OPTARG ;;
           t)  ARC_DIR=$OPTARG ;;
           v)  VERBOSE=TRUE
               LOG=$OPTARG
               if [ ! -f $LOG ];then
                   > $LOG
               fi ;;
           z)  LOGMINER=TRUE;;
        esac
    done
    if [ "x-$ARC_DIR" = "x-" ];then
         echo "NO ARC_DIR : aborting"
         exit
    fi
    if [ "x-$EXT_ARC" = "x-" ];then
         echo "NO EXT_ARC : aborting"
         exit
    fi
    if [ "x-$ACTION" = "x-" ];then
         echo "NO ACTION : aborting"
         exit
    fi
    if [ ! "x-$COMPRESS_PRG" = "x-" ];then
       if [ ! "x-$ACTION" =  "x-move" ];then
             ACTION=compress
       fi
    fi
    if [ "$CHECK_APPLIED" = "YES" ];then
       if [ -n "$ORACLE_SID" ];then
             export PATH=$PATH:/usr/local/bin
             export ORAENV_ASK=NO
             export ORACLE_SID=$ORACLE_SID
             . /usr/local/bin/oraenv
       fi
       if [ "$VERSION8" = "TRUE" ];then
          ret=`svrmgrl <<EOF
    connect internal
    select max(sequence#) from v\\$log_history ;
    EOF`
    LAST_APPLIED=`echo $ret | sed 's/.*------ \([^ ][^ ]* \).*/\1/' | awk '{print $1}'`
       else
        ret=`sqlplus -s '/ as sysdba' <<EOF
    set pagesize 0 head off pause off
    select max(SEQUENCE#) FROM V\\$ARCHIVED_LOG where applied = 'YES';
    EOF`
       LAST_APPLIED=`echo $ret | awk '{print $1}'`
       fi
    elif [ "$LOGMINER" = "TRUE" ];then
       if [ -n "$ORACLE_SID" ];then
             export PATH=$PATH:/usr/local/bin
             export ORAENV_ASK=NO
             export ORACLE_SID=$ORACLE_SID
             . /usr/local/bin/oraenv
       fi
        var=`sqlplus -s '/ as sysdba' <<EOF
    set pagesize 0 head off pause off serveroutput on
    DECLARE
    hScn number := 0;
    lScn number := 0;
    sScn number;
    ascn number;
    alog varchar2(1000);
    begin
      select min(start_scn), min(applied_scn) into sScn, ascn from dba_capture ;
      DBMS_OUTPUT.ENABLE(2000);
      for cr in (select distinct(a.ckpt_scn)
                 from system.logmnr_restart_ckpt\\$ a
                 where a.ckpt_scn <= ascn and a.valid = 1
                   and exists (select * from system.logmnr_log\\$ l
                       where a.ckpt_scn between l.first_change# and l.next_change#)
                  order by a.ckpt_scn desc)
      loop
        if (hScn = 0) then
           hScn := cr.ckpt_scn;
        else
           lScn := cr.ckpt_scn;
           exit;
        end if;
      end loop;
      if lScn = 0 then
        lScn := sScn;
      end if;
       select min(sequence#) into alog from v\\$archived_log where lScn between first_change# and next_change#;
      dbms_output.put_line(alog);
    end;
    EOF`
      # if there is no mandatory archive to keep, instead of a number we just get "PL/SQL procedure successfully completed"
      ret=`echo $var | awk '{print $1}'`
      if [ ! "$ret" = "PL/SQL" ];then
         LAST_APPLIED=$ret
      else
         unset LOGMINER
      fi
    fi
    PERC_NOW=`get_perc_occup`
    if [ $PERC_NOW -gt $MAX_PERC ];then
         cd $ARC_DIR
         cpt=`ls -tr *.$EXT_ARC | wc -w`
         if [ ! "x-$cpt" = "x-" ];then
              MID=`expr $cpt / $PART`
              cpt=0
              ls -tr *.$EXT_ARC |while read ARC
                  do
                     cpt=`expr $cpt + 1`
                     if [ $cpt -gt $MID ];then
                          break
                     fi
                     if [ "$CHECK_APPLIED" = "YES" -o "$LOGMINER" = "TRUE" ];then
                        VAR=`echo $ARC | sed 's/.*_\([0-9][0-9]*\)\..*/\1/' | sed 's/[^0-9][^0-9].*//'`
                        if [ $VAR -gt $LAST_APPLIED ];then
                             continue
                        fi
                     fi
                     case $ACTION in
                          'compress' ) $COMPRESS_PRG $ARC_DIR/$ARC
                                     if [ "x-$VERBOSE" = "x-TRUE" ];then
                                           echo " `date +%d-%m-%Y' '%H:%M` : $ARC compressed using $COMPRESS_PRG" >> $LOG
                                     fi ;;
                          'delete' ) rm $ARC_DIR/$ARC
                                     if [ "x-$VERBOSE" = "x-TRUE" ];then
                                           echo " `date +%d-%m-%Y' '%H:%M` : $ARC deleted" >> $LOG
                                     fi ;;
                          'list'   )   ls -l $ARC_DIR/$ARC ;;
                          'move'   ) mv  $ARC_DIR/$ARC $TARGET_DIR
                                     if [ ! "x-$COMPRESS_PRG" = "x-" ];then
                                           $COMPRESS_PRG $TARGET_DIR/$ARC
                                           if [ "x-$VERBOSE" = "x-TRUE" ];then
                                                 echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR and compressed" >> $LOG
                                           fi
                                     else
                                           if [ "x-$VERBOSE" = "x-TRUE" ];then
                                                 echo " `date +%d-%m-%Y' '%H:%M` : $ARC moved to $TARGET_DIR" >> $LOG
                                           fi
                                     fi ;;
                      esac
              done
          else
              echo "Warning : The filesystem is not full due to archive logs !"
              exit
          fi
    elif [ "x-$VERBOSE" = "x-TRUE" ];then
         echo "Nothing to do at `date +%d-%m-%Y' '%H:%M`" >> $LOG
    fi

  • Good way to capture audio?

    I'm looking for a quick and easy way, if there is such a thing, for capturing audio in OSX. I'm doing scoring work in Logic, and am always coming across reference ideas in movies, shows, etc., that I'd love to grab. Is there any good way to capture audio streaming from, for example, netflix or youtube? Or any audio that's flowing through the mac at any time. Doesn't have to be great quality, just as reference.
    The only way I've been able to do this is to rent DVDs and record from a DVD player into my audio interface, into Logic or GarageBand, Audacity, etc. It seems like, since Netflix is already streaming through the mac (somehow; I'm technologically illiterate), it would be great to be able to just grab a few minutes or seconds of audio as it goes by.
    Thanks!

    I'd take a look at Rogue Amoeba's Audio Hijack Pro.
    http://www.rogueamoeba.com/audiohijackpro/
    Matt

  • Best way to map server shares to drives

    I've always used logon.bat for mapping server shares to client network drive letters. However, I'm being told that setting up group policies is the proper way to map network drives. Does anyone have any insight on this? It seems that group policies at this level are kind of buried in Server Essentials, which makes me think MS didn't intend for Server Essentials to use GPs for things like this.
    If GPs are the proper way to map network drives, is there a good how-to for this? I didn't have any luck the first time I tried.
    Thanks,
    Greg
    Greg Zartman

    Hi Greg
    I suggest you try the following.
    1. Create a GPO and link it to the relevant users OU.
    2. Right click on the GPO link and select Edit.
    3. Select User Configuration and expand Preferences > Windows Settings.
    4. Select Drive Maps, right click on the right side of the window, then select New > Mapped Drive.
    5. Within the opened window, set the Action to Create.
    6. Then type the folder location and select a drive letter.
    7. Under Show/Hide Drive select Show this drive / show all drives, and remember to check the Reconnect box in order to reconnect at log-in.
    8. Then navigate to the Common tab. Check the Item Level Targeting box and select Targeting in order to choose which user group to assign it to. Apply and OK.
    9. Go to CMD and type gpupdate /force
    10. Log the users off and on again, and test.
    You are done. :)
    Aref

  • Mapping the High Speed Capture signal to RTSI

    Hello,
    Can I map the "High Speed Capture" signal to RTSI?
    When I use the motion RTSI example and make some changes (I set the source to "High Speed Capture" and the destination to RTSI_0), I get an error, although this should be possible!
    I want to use this input to trigger an action on another PCI card.
    I'm using a pci-7344 with umi-7774 and pcie-1430
    Thanks
    Mor
    Message Edited by MotiM on 08-01-2009 08:47 AM

    Dear Jochen,
    Thank you for your reply.
    I tried to map the High Speed Capture to RTSI, but the HSC doesn't work.
    When I remove the RTSI mapping from my diagram, the HSC works and I can capture the encoder position.
    I'm using the UMI-7774, and the HSC is wired to the global connectors (TRIGGER/BREAKPOINT connector).
    I also read all the relevant documentation about my hardware, and I noticed there is a comment about mapping the motion RTSI
    (from the Select Signal vi Help): Note: You must route signals from the RTSI lines before you enable high-speed capture.
    I took that into account as well, and it still doesn't work...
    I would really appreciate it if you could take a look at the 2 versions of the vi I attached here.
    They both need to do the same thing.
    Each vi contains 2 parallel diagrams: one diagram for the single axis move (the move is to target position X),
    and a second diagram for the vision, where there is a "trigger each line" from an RTSI line (the motion diagram includes mapping encoder phase A to the RTSI for this purpose),
    and I want to use the High Speed Capture to trigger the start of IMAQ as well.
    The example
     HSC in motion activate the start of imaq in second loop.vi
    is a working example that runs well and captures the image,
    but it's "dirty" programming; I don't think it should work like this.
    It's actually an implementation of a "busy wait" loop... (I don't like it, but it works...)
    The example
    HSC triger directly the start of imaq.vi
    is how I think it should be (I know that I need to trigger the HSC within the timeout window while IMAQ waits for its start trigger to occur; that is why I put a big number there...)
    BUT this vi doesn't work; the high-speed capture doesn't happen at all...
    Can you take a look at these vi's and let me know what I'm doing wrong here?
    I really appreciate your help.
    Regards
    Mor
    Message Edited by M0Reng on 08-03-2009 02:14 PM
    Attachments:
    HSC in motion activate the start of imaq in second loop.vi 72 KB
    HSC triger directly the start of imaq.vi 71 KB

  • What's a good way to switch between a real and mock data service?

    I have a new Flex application and am contemplating the use of Mate's mock remote object capability.
    However, I'd like the application to be able to switch back and forth between a real and a mock data service without recompilation, and without loading the mock service objects unless they are needed.
    What's a good way to handle this switching without recompiling the application?

    The only way I can think of to not have ANY mock services in the production code is to take them out manually. Your program, for mocking purposes, is going to have to know where these services/objects are and how to contact them. I'm sure if it had to be done, you could use some mass of modules and event maps.
    Sincerely ,
      Ubu

  • Does anyone know if Apple's One to One program would be a good way to learn how to use Logic Pro, or am I better off going to school to learn audio engineering or something?

    Of course going to school would be a good option, but I want to know if One to One is also a good way to learn how to use Logic Pro. Has anyone been through the One to One program for Logic Pro who can say that they learned to use it well because of it?

    For sure, one-to-one training, if given by a capable tutor, will hand you the means to build up self-confidence and will therefore let you operate the hardware/software in an intelligent manner, instead of going for the trial-and-error method, which has its pros and cons too. Once you've passed this beginner's phase you will make your own decisions intelligently, and then you will also start to gain experience and learn even from your mistakes. Something like that in theory; the rest is up to you!
    Have a nice day

  • What are good ways to send a big file (20MB-100MB) to my friend?

    What are good ways to send a big file (20MB-100MB) to my friend?
    Thanks in advance

    If this is over the internet, iChat is probably your best bet,
    but if you just want a transfer,
    plug a FireWire cable into both of your computers, shut down one of them, hold "T" and press the power button; the restarted computer should pop up as an external drive on the second computer.

  • Is there a good way to automate changing existing links to external PDFs to instead be to PDF attachments with same names?

    Greetings!
    Can anyone tell me a good way to automate changing a "main" PDF's existing hyperlinks to external PDFs so that they instead link to PDF attachments of the main PDF (the attached PDFs having the same filenames as the external PDFs)?
    Format of current link in main PDF:
    Go to a page in another document
    File: C:\\path\filename.pdf
    Destination name: P:1
    When link is manually changed to link to a PDF that has been attached to the main PDF:
    Go to a page in another document
    File: PDF attachment
    Page: 1
    Zoom level: Fit page
    The new PDF link does not list the PDF attachment filename in the link.
    The main PDF (the one that has the links that I would like to automatically change) is generated with the hyperlinks to other PDFs from Adobe FrameMaker 11. In FrameMaker 11, we added special marker text "openpage filename.pdf:1" that is "automatically" changed into hyperlinks to external PDFs when the main PDF is created (the PostScript file is processed) by Adobe Acrobat Pro 11.
    My first choice would be to change the format of the special marker text in the Adobe FrameMaker source file, but I have not found a way to specify a link to a PDF attachment with a certain filename.
    However, I would also really like the option of running a batch process on a main PDF to change the links there.
    Has anyone tried using the Adobe ExtendScript Toolkit CS6 to do this? That is, a script that could automatically modify the hyperlinks in a PDF to link to PDF attachments rather than external PDFs, or else modify the unprocessed links while they are still in PostScript format, or else modify the marker text in FrameMaker so that instead of creating a link to a PDF in the same directory as the main PDF being processed, it creates a link to a PDF attachment?
    Thank you,
    Judith Wallace
    [email protected]

    FrameMaker's Hypertext feature has no ability to "link" to an attachment on/in a target PDF.
    PDFs sourced from FM can have named destinations set in the FM authoring files, such that a link from file01.fm to the specified named destination in file01.fm will be functional in the PDFs created from the FM files.
    Then, of course, there are the dynamics possible through the use of FM Books. But that's grist for the mills at the FM user-to-user forum.
    RE: Scripting - PDF scripting is via Acrobat JavaScript. That's something still of and by Adobe, although it may migrate to an ISO Standard (or be rolled up in a future ISO 32000).
    So, to the point: look to Acrobat JavaScript for scripting.
    Be well...

  • Any good way to store pdf files in oracle 10g?

    Hi, I have many pdf files that I want to store in the database. Is there any good way to do that in 10g? Or do I have to go to 11?
    thanks,
    t.k.

    mbobak wrote:
        Justin Cave wrote:
            user10217806 wrote:
                Can I store pdf in blob, and still search words in the pdfs?
            Probably not, no. If you want to search the data inside Oracle, you'll want to look at Oracle Multimedia.
        I think the correct answer is "not efficiently". :-)
    Well, there is nothing that prevents you from building Oracle Multimedia yourself and getting similar levels of efficiency. Other than a desire not to invest dozens of man-years replicating delivered functionality, of course.

  • Safari and Others Hang. Is there a good way to diagnose which process is being a trouble maker?

    I have a peculiar problem that has been developing for some time now.
    Current symptoms:
      Safari (any version 5.0-current) will load, but WebProcess.app pegs one core of the CPU at 100% (non-responsive) and no web pages will load.
      VLC.app launches in the same fashion; one process pegs at 100%.
      iMovie.app hangs. Not crashing, but failing to launch successfully.
      An AppleScript saved as an application behaves in the same way.
    Attempted diagnosis:
      I'm currently running OS 10.7.3 on an Early 2009 Mac Pro. I have tried:
      Deleting preferences and saved application states. Then I realized the problem can't be in ~/Lib/Prefs. because it is also a problem on a clean user profile I created to check. So I also looked in /Sys/Lib/Prefs. with no success.
      Running Repair Disk Permissions more times than I can count. This comes up with regularity but solves nothing:
        Group differs on “Library/Preferences/com.apple.alf.plist”; should be 80; group is 0.
        Repaired “Library/Preferences/com.apple.alf.plist”
        Permissions differ on “usr/lib/ruby”; should be lrwxr-xr-x ; they are drwxr-xr-x .
        Repaired “usr/lib/ruby”
      Reinstalling 10.7.3 from the Combo Updater from Apple's website. No help.
      I did find that launching any affected app from the Unix executable in its .app bundle (so it launches in Terminal) will open it and allow it to run without issue (yea! temporary fix).
      Finally, I booted with -x into Safe Mode. There the problem vanished. Hooray, it's another piece of software or an extension I have loaded (could be any from the past 6 months or so)!
      So I rebooted and tried to launch Safari immediately. It worked! Only after a minute or so, when the GUI and background apps had finished loading, did the problem return.
    What I need help with:
    Is there any good way to tell which application is causing the problem, or what these four apps have in common? I have looked at Console logs, crash reports, etc., but admittedly am not sure what to look for. I would be happy to post logs or reports or more detailed info if it would help.
    Otherwise, thank you to anyone who reads this and has any good ideas short of a fresh install. Cheers!

    Hi...
    Reinstalled 10.7.3 from the Combo Updater from apples website.
    The only way to reinstall Mac OS X or repair the startup disk running v10.7.3 Lion is to use Lion Recovery. The combo update does not do that.
    How much free space is on the startup disk? Not enough free space can account for the problems with your apps.
    Right-click or control-click the MacintoshHD icon. Click Get Info. In the Get Info window you will see Capacity and Available. Make sure there's a minimum of 15% free disk space.
    and no web-pages will load.
    Try using OpenDNS as suggested here >  Safari 5.0.1 or later: Slow or partial webpage loading, or webpage cannot be found
    Use OpenDNS for better speed, more security, includes anti phishing filters, prevents browser redirects, and it's free.
    Open System Preferences, then select Network. Click the Advanced button, then click the DNS tab.
    Click +
    Enter these addresses exactly as you see them here.
    208.67.222.222
    Click +
    208.67.220.220
    Then click OK.
    edited by:  cs

  • What's a good way to dynamically allow shortened commands?

    I have a BufferedReader set up that attaches to System.in.
    So, when I check the BufferedReader, it will retrieve any commands put in by a user.
    I will have a hash table for commands set up like so:
    "north", "dn"
    "west", "dw"
    "south", "ds"
    "east", "de"
    "follow", "follow"
    "flee", "flee"
    "flip", "flip"
    So, what I want is to be able to check all the commands and shorten each one to the smallest number of characters before it collides with another.
    Therefore, for the example hash list, if I typed "fle", the code would select "flee". Or, if I typed "fo", the code would select "follow". But if I typed "f", the code would respond with a "command not found" error, because "f" could be follow, flip or flee. I want to make it so that I can type "follow", "follo", "foll", "fol", or "fo" and select the same "follow" command.
    Has anyone found a good way of going about this? If my description is confusing, I'll try to re-explain in greater detail.

    Demo: I'm using the nul character to represent the end of the word, so that the data structure can represent that "hell" and "hello" are both in the vocabulary:
    import java.util.*;
    class Node {
        private SortedMap<Character, Node> children = new TreeMap<Character, Node>();
        // 0 <= index <= word.length()
        private void add(String word, int index) {
            if (index == word.length()) {
                // the nul character marks the end of a complete word
                children.put(Character.valueOf('\u0000'), null);
            } else {
                char ch = word.charAt(index);
                Node child = children.get(ch);
                if (child == null) {
                    children.put(ch, child = new Node());
                }
                child.add(word, index + 1);
            }
        }
        public void add(String word) {
            if (word == null || word.length() == 0)
                throw new IllegalArgumentException();
            add(word, 0);
        }
        public String toString() {
            return children.toString();
        }
    }
    public class Example {
        public static void main(String[] args) throws Exception {
            Node root = new Node();
            root.add("hello");
            root.add("how");
            root.add("who");
            root.add("hell");
            System.out.println(root.toString());
        }
    }

  • What's a good way to manage custom schema for DS  5.1?

    What's a good way to manage custom schema?
    Custom Schema for Object Class and Attributes
    The reason I ask is that there might be a need in the future to export that custom schema into a different branded directory server, and I want to make that as painless as possible.
    Right now, I have thought of 2 options:
    1) Create my own LDIF file with my custom attributes and object classes, so that if one day I need to export to another directory server, I can just copy that custom-created LDIF file over. (Will this work?)
    2) Create a Java application using JNDI. What this Java app will do is read through an XML file and create those object classes and attributes on the fly. (Of course, the XML structure will be predefined by me, so that my Java app will be able to parse it correctly. Will this work?)
    Any more suggestions? I would like to hear more advice and suggestions.
    Also, I assume this will work even with replication: all I need to update is the master server, and the slaves will replicate automatically.
    Thank you very much! :)

  • What's a good way to do a thread dump into a separate file

    What is a good way to do a thread dump automatically into a separate file?
    For example, I run a script to do the thread dump, but unfortunately it goes into my stdout log file with the rest of my WebLogic errors.
    Any ideas? I want it in a separate file when I run my script.

    Do a Google search on "Drobo S" "benchmark."  I don't have a Drobo S, only the regular Drobo.  But here's a guy who tested one on Windows:
    http://mansurovs.com/drobo-s-review-usb-3-0-2nd-generation
    This one has it a bit faster:
    http://the-gadgeteer.com/2011/12/31/drobo-s-storage-array-review/
    Do read up on a few reviews of it, and be absolutely clear that interface speed (i.e. eSATA versus Firewire versus Thunderbolt) is NOT the same as the performance of the system.  The Drobo cannot keep up with any interfaces... at least the Drobo and the Drobo S cannot.
    I am not using the FS model which is a NAS.  I am using the plain old "Drobo" which is slower than the Drobo S, but that's not to say that the Drobo S is fast, because it is not.
    The Drobo in theory is really attractive: Dead simple to manage, can mix and match drive sizes, offers you some data protection, etc.  However do note that protected storage is not, in and of itself, a backup.  You need other backups besides just the data on the Drobo.  And, because it's so slow, it's really not a great fit for photo storage.  See this review from a guy who used to think the Drobo was great for that and then appended his review:
    http://www.stuckincustoms.com/drobo-review/
    To be as clear as possible, IMO the BEST backup strategy with something like Aperture (so long as your managed Aperture library is of a manageable size, like < 800 GB), is to get a few small portable Firewire 800 drives and keep vaults on each one.  They are great because they are easy to use, to have with you, are bus powered, and you WILL offsite them.
