Productsign behaviour change in 10.8.3

Hi,
I used to sign installer packages on 10.7 with a self-signed certificate, and signing succeeded. My self-signed certificate was basically a "codesign" type of certificate, and I also have its private key on the machine I am signing on.
From 10.8 onwards the same productsign command no longer works.
The productsign command is:
productsign --sign <My Cert Identity> <input file> <output file>
This used to work on 10.7, but on 10.8 it gives the error:
productsign error: Could not find appropriate signing identity for "<My Cert identity>"
I think the problem is that my self-signed certificate is a "codesign" certificate and not an installer-signing certificate.
But since this same command worked on 10.7, it seems the behaviour of productsign has changed and become stricter.
Is my understanding correct?
Secondly, how can I create a self-signed installer-signing identity on the Mac so that I can sign my installer on 10.8? (I do not have an Apple Developer Installer ID.)
Thanks a lot in advance,
Amit

Try posting here:
https://discussions.apple.com/community/developer_forums
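The diagnosis in the question matches what is generally reported: from 10.8, productsign checks the certificate's key-usage extensions and only accepts an identity marked for installer package signing, so a plain code-signing certificate is rejected. A self-signed certificate made with Keychain Access's Certificate Assistant will only satisfy productsign if it carries the installer-signing usage. A hedged sketch for inspecting what is in the keychain ("My Cert" is a placeholder identity name; macOS only):

```shell
# Sketch (macOS; not runnable elsewhere). List identities that have a
# usable private key in the keychain:
security find-identity -p basic -v

# Dump the certificate ("My Cert" is a placeholder name) and inspect its
# Extended Key Usage; code-signing and installer-signing certificates
# carry different usages:
security find-certificate -c "My Cert" -p \
  | openssl x509 -noout -text \
  | grep -A1 "Extended Key Usage"
```

If the identity does not appear in `find-identity` output, or its certificate lacks an installer-signing usage, productsign on 10.8 will report the "Could not find appropriate signing identity" error quoted above.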

Similar Messages

  • XI 3.0 SP15: collapseContext behaviour changed?

    Hi.
    I'm having a nightmare:
    In one of my mappings I apply "collapseContext" to a queue that contains only [SUPPRESS]....
    With the production environment (SP12) the result queue is [SUPPRESS] again, while in the test environment (SP15) the result queue contains:
    [SUPPRESS]
    Has anybody noticed this behaviour change with SP15? Is this a bug, and is there perhaps already a solution?
    Sorry for the panic, but we are in a very critical situation: we cannot upgrade production to SP15, otherwise our flow doesn't work, and we cannot do maintenance because the DEV environment is at a different SP level.
    Thanks in advance
    Alessandro

    Hi Alessandro,
    The behavior of "collapseContext" didn't change; there was only a bugfix some time ago. Anyway, based on your description I could not reproduce the problem on our SP15 patch test system: I get the same result as you have on the SP12 system.
    I've just had an idea: maybe you're confusing a context change with an empty string value, because since SP13 context changes also have their values printed when you display a queue.
    Do you have an example of a Message Mapping that produces a different result in SP15 than in SP12?
    Regards,
      Alexey Borodin

  • Keychain behaviour changed

    Hi,
    I have recently installed Google Chrome. When it asked for access to my keychain for various passwords saved by Safari, I clicked "Allow Always" and without asking my keychain password, Chrome was given the right to use the particular passwords.
    This is a change in behaviour: in the past, when I clicked "Allow" or "Allow Always", Keychain would ask for my keychain password. I really liked this feature: if someone got to use my laptop without my permission, I was happy that they would not be able to get my passwords. But now it would be easy for a malicious user to write an application which extracts passwords: when Keychain asks whether that application is allowed to access a given password, a single click is enough to convince Keychain to give that password out.
    Could someone please: 1) confirm this behaviour change? 2) tell me how to go back to the old, safer behaviour, if possible?
    I look forward to some answers. Please test this feature with, say, Chrome or Camino or whatever and tell me what your experience is (on 10.6.2). I am worried that this behaviour change could be the result of some tampering with my laptop, so if you don't experience the issue I described, I would be really happy to hear your comments.
    Thanks a lot,

    Hi Thomas,
    I am aware of keychain unlocking, etc. However, please note that I am not talking about a given application using information from an unlocked keychain. Safari is "Always Allowed" to use the password for a given page when the keychain is unlocked, and this has always been the case. But my situation is different.
    To make it clearer: run the Keychain Access application and double-click an existing item. At the bottom there is a little box which allows you to see the recorded password; when you click that box you are asked to reconfirm your keychain password, even for items that are part of an unlocked keychain. Now, the problem I described would allow a malicious user to overcome this security feature. The fact that I unlocked a keychain and allowed a given application to use a particular password should NOT allow some other application access without a reconfirmation.
    I have never played with the Keychain properties before; I have always used the default behaviour, and the default behaviour seems to have changed somehow. Are you saying that your default behaviour (when an app tries to use passwords saved by another app) was different?
    Thanks for the response.
    Cheers,

  • Behaviour change of synchronous reply after patching to 10.1.3.4

    Hi people,
    I have a process like this:
    [http://www.jettmedia.com/images/630_syncReply10134Test.jpg]
    Normally, in 10.1.3.1 and 10.1.3.3.x, the process continues after a partial reply, any fault uncaught in the process terminates the flow, and changes to the output variable do not change the process response.
    But after patching to 10.1.3.4 I am seeing different behaviour on synchronous process replies:
    the process continues, any uncaught fault AFTER A REPLY is raised to the client, and any change to the output variable AFTER A REPLY changes the process response.
    The worst thing is that it is not clear when the response is actually sent to the client (if I put a wait activity of 1 minute after the reply, it works as expected, like in 10.1.3.3...).
    I read this new-features list from Oracle:
    http://www.oracle.com/technology/tech/soa/cab/oraclesoacab-webinar-04-22-08bpel_10_1_3_4_update.pdf
    It says that "Faults in a transient process go to the client", correcting an old problem of lost instances.
    I can't find any other documentation explaining this new behaviour, but for my organization it is a huge problem affecting many production processes on the patched servers, because I raise faults to leave instances marked as faulted on the console as a normal procedure in the flows; now, if that throw is close enough after a reply (with an unknown criterion or number of activities), it is raised to the client.
    Thanks for any help. For now the workaround to keep the same behaviour is: WAIT after every reply!
    Greetings
    Dam

    Yes, it's a fire-and-forget pattern. Thanks for the suggestion, I'll try it...
    but it's difficult to refactor several flows just for a patch. Do you have any idea whether this change after upgrading to 10.1.3.4 is normal? Is this expected behaviour?
    Thank you very much again

  • Front Row behaviour changed

    Hi.
    I am using my Mac mini as a media station and to watch TV a lot. To bring Front Row and EyeTV together, I used this Elgato tip: http://forums.elgato.com/viewtopic.php?f=96&t=1360
    Despite the performance issues of Front Row (http://www.tuaw.com/2009/09/01/front-row-performance-takes-a-dive-with-newest-kitty/), the configuration of Front Row changed after the Snow Leopard update.
    It is no longer possible to open Front Row with a long press of the menu button from EyeTV. Does anybody know how to get the default behaviour of the EyeTV tip back?

    Hey,
    as someone said earlier, it seems to be an issue with the screen saver.
    However, I didn't have to turn mine off altogether. If you go to the settings pane in Front Row and turn the screen saver off there, it is only inactive while Front Row is running. Why would you want a screen saver then anyway?
    This fixed the problem for me.

  • Behaviour change in POST from Forms 6.0.5 to 6.0.8

    Is there a known change in the behaviour of the POST built-in from version 6.0.5 to 6.0.8 of Forms?
    I'm working with a number of forms, originally generated from Designer 6, that rely on a server-side trigger to fill in a unique key field in a record (i.e. field = <table_name>_id, filled from a sequence).
    With Forms 6.0.5.33 the corresponding form field can be set to required=yes; when the form is run, the field is null until the record is saved to the database.
    With Forms 6.0.8.11.3 (and 6.0.8.20.1) a "field must be entered" error is displayed when the save logic is processed (at the time POST is called).
    Any help would be appreciated
    Geoff Coleman
    [email protected]

    The change is not in POST but rather in the way the Required property on an item is handled: it is now applied even if the item in question is not visible (e.g. is on a null canvas).
    This was always the way it was supposed to work, but the bug was only recently fixed.
    Unfortunately, the code that Designer generated incorrectly created non-displayed items with this Required property set, which means that such forms are now broken by the bugfix.
    Support can supply you with a Forms API program which will fix up your Forms to un-set the required attribute for non-visible items (quote Bug 1228761)

  • IP VRF to VRF Definition Import-Map behaviour changes

    Have the import rules changed from the ip vrf syntax (IPv4 only) to vrf definition (IPv4 and IPv6)?
    The issue: we have a management VRF which is used for access, monitoring and archiving, and which works well in the ip vrf syntax. Example:
    ip vrf A-IPVPN
     rd 9282:1002
     import map Customer-Mgmt-Infrastructure
     route-target export 9282:1002
     route-target import 9282:1002
     route-target import 9282:1999
    ip vrf Customer-Mgmt
     rd 9282:1999
     import map Import-Customer-Mgmt
     route-target export 9282:1999
     route-target import 9282:1999
     route-target import 9282:2010
     route-target import 9282:1002
     route-target import 9282:2011
     route-target import 9282:1005
    route-map Import-Customer-Mgmt permit 10
     match ip address prefix-list Customer-Mgmt-CPE
    ip prefix-list Customer-Mgmt-CPE: 2 entries
       seq 5 deny 169.254.254.0/24
       seq 10 permit 169.254.0.0/16 le 32
    This allows all PEs to learn customer routes and to import and export management details. I believe I have followed best practice and the result is what I would expect; however, since creating some new customers with the vrf definition syntax, it appears that Import-Customer-Mgmt now filters out BGP routes within the local VRF PE-PE. The routes are visible via:
    show ip bgp vpnv4 rd
    but are not imported into the BGP table.
    vrf definition example:
    vrf definition S-C-IPVPN
     rd 9282:1005
     route-target export 9282:1005
     route-target import 9282:1005
     route-target import 9282:1999
     address-family ipv4
      import map Customer-Mgmt-Infrastructure
     exit-address-family
    After hitting my head against a wall for longer than I would like to admit, I removed the import map, and the routes in the RD are installed into the BGP table.
    My question is: is this now default behaviour, or is it a bug in our particular version (asr1002x-universalk9.03.09.01.S.153-2.S1.SPA.bin)?
    I had been considering upgrading our syntax using vrf upgrade-cli; glad I didn't, as that would have caused a major outage, since we use a fair amount of import maps with our Internet transit circuits.
    If this is normal behaviour, what is the best way to match and permit the local VRF RD, bearing in mind that I would ideally like to reuse the same route-map?
    I will continue to investigate, but if anyone has experience of this behaviour I would appreciate their input.
    Regards, Neil

    The following route map has no impact:
    route-map Customer-Mgmt-Infrastructure-2 permit, sequence 10
      Match clauses:
        community (community-list filter): S-C-IPVPN
      Set clauses:
      Policy routing matches: 0 packets, 0 bytes
    Named Community expanded list S-C-IPVPN
        permit RT:9282:1005
    I think I will need to lab it up.
    Neil
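    One detail worth checking in the route-map above: a standard community list does not match extended communities, and route targets are extended communities. If the intent is to match RT 9282:1005, an extcommunity-list is the usual tool. A hedged, untested sketch reusing the names from above:

    ```
    ip extcommunity-list standard S-C-IPVPN-RT permit rt 9282:1005
    !
    route-map Customer-Mgmt-Infrastructure-2 permit 10
     match extcommunity S-C-IPVPN-RT
    ```

    Whether this also resolves the local-RD import behaviour under vrf definition is a separate question, but it would rule out the community-vs-extcommunity mismatch as the reason the route-map "has no impact".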

  • 7.1.1 Display Level - behaviour changed and doesn't function as in manual

    In the Matrix Editor, the "move up display level" control (the little close box to the left of the local "edit" menu) used to behave as described in the reference guide, on page 22:
    "Display Levels in the Editors
    In the Matrix and Hyper Editors, this step up the display hierarchy means that you will then see an Arrange window. In this scenario, a change to the lowest display hierarchy level will cause the relevant editor to reappear. At the Arrange level, you will see the local menus of the Matrix or Score Editors, which also contain all of the Arrange window functions. Double-clicking on a MIDI Region opens the Hyper, Matrix, Score Editor, or Event List display of the contents of the MIDI Region. Use of this facility, in conjunction with the UDL button allows you to quickly switch to another MIDI Region, and edit its contents. "
    With a region selected for editing, clicking it once moves up a display level, showing all the notes in all sequences. Clicking it again should turn the matrix window into a pseudo arrange window - you can then select a different region to go back down into the matrix editor again.
    But this doesn't work for me anymore, in 7.1.1, and I cannot find any mention that this behaviour has changed in the readmes etc.
    Did they change this behaviour? Is it a bug? Does the same thing happen to you or is it just me?
    Many thanks

    Note: This works correctly in the Hyper Editor; it goes back to a pseudo Arrange page. But the Matrix window does not...

  • Menu click behaviour changed

    Sometime over the last couple of months I noticed that, on occasion, when I click an item on the menu a drop-down appears and immediately disappears. I need to click the menu item a second time to keep the drop-down on the screen so that I can select an item from it. I don't know when this problem started; at first I ignored it, but now it's getting annoying. I've tried all the usual stuff like reinstalling the latest combo updates, repairing permissions, etc., and nothing has helped. I thought it might be the Magic Mouse driver, so I switched to an old Mighty Mouse, and still there was no improvement. Nothing else seems to be affected. I know you might think I'm being fussy, but imagine when you are trying to do something in a hurry and you need to click a menu more than once to do it; it gets very annoying. Anyway, this is not the intended behaviour for a mouse action. Any suggestions?
    Stephen

    I am trying to avoid reinstalling Snow Leopard and all my other applications and their updates. You can imagine it will take quite sometime, not to mention all the documents and other data that I'll need to recover.
    Actually, with Snow Leopard, Apple introduced a new default approach to installations from the DVD. Using the DVD, the installer ONLY replaces the system software, leaving your user data and applications untouched. It only takes 35-45 minutes to reinstall the OS, and all your data and apps are preserved.
    So when necessary, there is now no excuse for not reinstalling the OS to see if it resolves an issue.
    You say you tried it with another mouse. What kind? USB? Bluetooth? I'm more inclined to believe the issue is with the mouse or with user behavior than with the OS. If you have been seeing this behavior with wireless mice, try using a corded mouse to see if the behavior changes.
    As stated earlier, the only way this could be occurring is that the OS is detecting two clicks instead of one when you select a menu. You are only clicking once - right?

  • DBWR Behaviour Change

    Hi,
    We are currently experiencing a change in behavior in our performance-testing environment with regard to the size of the writes issued by the database writer.
    In our case we have a table partition which resides on its own LUN and is 100% inserts (no indexes, constraints, etc.). Previously the writes to this LUN were approximately 100K, as observed in iostat and the dba_hist_filestatxs database table. The I/O profile from the Sun Storage Performance Analyser (SWAT) also reflects this and indicates that the majority of I/O on this LUN is sequential.
    We have always had 8K writes to another LUN which does more random I/O.
    We are now seeing that the write sizes have dropped to approximately 16K, indicating that DBWR is not coalescing the writes in the same way, and the I/O profile appears more random.
    Does anybody have a feel for what may influence this behaviour?
    The one change that we are seeing at the I/O level is that our I/O service time has increased, so I am wondering whether the DBWR process would change its behaviour if it detected longer response times for db_file_parallel_write events.
    Any theories would be appreciated.
    Environment
    Database Version : 10.2.0.2
    OS Version: Solaris 10 SPARC64
    CPU Cores : 8
    Database Writers: 2
    Thanks and Regards
    Adrian

    Hi,
    > We are now seeing that the write sizes have dropped to approximately 16K, indicating that DBWR is not coalescing the writes in the same way, and the I/O profile appears more random.
    Do you have problems with that?
    Why are you looking at the DBWR performance rather than end-user's?
    What is your db_cache_size?
    Are you using async IO?
    Are you aware of any changes made to the environment?
    and, of course, why not 10.2.0.4?

  • [solved] Did rsync behaviour change? (= nope, not here at least )

    Hi!
    I just used one of my "full system backup" HDs, and the backup script on it deleted all the backups and itself (it still made a current backup; just the incremental ones are gone now) -.-
    I hadn't used the script from that HD for a while (I didn't have changes in big files for almost a year, so I only pushed my home folder to the router HD every day), but back when I was working with big files more, I used it every second day for several years and it always worked reliably.
    That being said, my guess would be that it deleted the "excluded=" folders, which it shouldn't (?) or didn't use to with --delete (the HD root was a clone of my system root, except for one excluded /BACKUP directory that kept the increments from --backup). I can't see any other way.
    Did anyone notice a change in rsync behaviour? Anything strange?
    I couldn't find anything on the net that fits... but I almost remember having fixed something rsync-related somewhere, some time ago... not sure what/if/why though, maybe just déjà vu...
    I have some more distributed rsync scripts and might have to check a lot of things to see if there's really been a change in rsync behaviour...
    (Please excuse the general fuzziness of this question/topic... I'm in the middle of one of those weird fits of "I should have made a backup before I started that backup, now I need to check EVERYTHING and the server too, and omg, when was the last time I even ssh'd into the router and looked at the cronjobs!?")
    Last edited by whoops (2015-01-31 09:55:19)

    \hbar wrote:I don't get it: I see a hyphen before the options. Was it edited?
    yes, sorry,  it probably wasn't clear...:
    whoops wrote:Oh, thanks, that was a hasty copy paste error (was scared of the --delete ), fixed it now.
    ... what exactly I "fixed" there. Should have re-posted the line instead of editing the old post .
    \hbar wrote:Anyways, it seems to me that */BACKUP/* would not exclude /BACKUP : the /* files you are sync-ing get expanded into their names, one of which would be 'BACKUP' which does not (and should not) match '*/BACKUP/*'
    Hmmm, right, "/BACKUP" should definitely be in there too just to be safe, thanks!
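    The globbing point can be sanity-checked with plain bash pattern matching (a sketch; rsync's exclude matcher has its own rules, but the shape of the problem is the same): a pattern like `*/BACKUP/*` matches things inside a BACKUP directory, not the bare directory name itself.

    ```shell
    #!/bin/bash
    # Sketch: bash glob check mirroring the exclude pattern discussed above.
    # '*/BACKUP/*' requires something before and after '/BACKUP/', so the
    # bare top-level name does not match, while contents of BACKUP do.
    matches_exclude() {
      [[ $1 == */BACKUP/* ]]
    }

    matches_exclude "/BACKUP" || echo "/BACKUP itself is NOT matched by this pattern"
    matches_exclude "/data/BACKUP/2015-01-30" && echo "contents of a BACKUP dir are matched"
    ```

    So adding a separate `/BACKUP` (or `BACKUP/`) rule, as suggested above, is the safe fix.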
    There also shouldn't have been a /BACKUP in the first place, though. Here's a more recent version of the whole script that I found on a different backup drive in the meantime (the one that caused the trouble should be almost identical):
    #!/bin/bash
    # sanity checks: refuse to run if this looks like a backup drive, and require the /backupthis marker
    [ -d /BACKUP ] && { echo "TRYING TO RUN SCRIPT ON WRONG DRIVE?"; exit 1; }
    [ -f /backupthis ] || { echo "NOPE! This drive does not want to be backed up ('/backupthis' missing)"; exit 1; }
    # find backup drive marker file
    while read mount;
    do
    if [ -a "$mount/BACKUP/backupdriveisabove" ];
    then echo $mount;
    backup_drive=$mount;
    fi;
    done < <(mount | grep /media | sed "s/.*on \(.*\) type.*/\1/")
    # check if marker file is fine - should match:
    # directory="/media/BACKUPMOUNT"; blkid $(mount | grep "$directory") | sed 's/.*: //' > $directory/BACKUP/backupdriveisabove
    grep "$(blkid $(mount | grep $backup_drive) | sed 's/.*: //')" "$backup_drive/BACKUP/backupdriveisabove" \
    || { echo "$backup_drive is not a valid backup drive." ; exit 1; }
    echo -e "\E[32m===== Backup drive found in $backup_drive =====" && tput sgr0
    # Help! I'm scared!
    for countdown in `seq 0 5`; do echo "death and destruction in `expr 5 - $countdown`"; sleep 1; done;
    # Rsync stuff of which incremental backups should be kept in ./BACKUP/$(date +%Y-%m-%d) first
    echo -e "\E[32m===== Partial incremental backup =====" && tput sgr0
    synctime=1337
    while (( $synctime > 15 ))
    do statime=$(date +%s);
    rsync -aAXvxbiH /* \
    $backup_drive --backup-dir=$backup_drive/BACKUP/$(date +%Y-%m-%d) \
    --log-file="$backup_drive/BACKUP/$(date +%Y-%m-%d).log" \
    --exclude-from="/opt/scripts/backup.exclude" \
    --delete --fuzzy \
    || { echo 'incremental backup failed' ; exit 1; }
    eval synctime=`expr $(date +%s) - $statime`
    if (($synctime > 14)); then echo "rsync took too long ($synctime) - repeating"; fi;
    done;
    # Rsync (almost all) previously excluded files / nonincremental full backup to backup drive root
    echo -e "\E[32m===== Full system backup =====" && tput sgr0
    rsync -aAXv /* "$backup_drive" --exclude={share/Trash/*,/dev/*,/proc/*,/sys/*,/tmp/*,/tmp/.*,/run/*,/mnt/*,/media/*,/lost+found,/home/*/.gvfs,*/BACKUP/*,/home/*/.ccache/*,/home/*/.thumbnails/*,/var/tmp/*,/home/*/.mozilla/firefox/*.default/Cache/*,/home/*/.mozilla/firefox/*.default/thumbnails/*,/home/*/.pulse/*,/home/_*,*.part} \
    || { echo 'full backup failed' ; exit 1; }
    # Try to merge two old backup increments without making too big a gap (if there are more than 60 backups)
    echo -e "\E[32m===== Trying to merge 2 backup increments =====" && tput sgr0
    minnamecur=0;
    treshold=$(ls -d $backup_drive/BACKUP/2???-??-?? | wc -l);
    if (($treshold > 60))
    then {
    treshold=`expr $treshold / 3`
    # Do not create gaps between incremental backups that would be bigger than this many seconds
    mingap=$[60*60*24*30]
    pushd "$backup_drive/BACKUP" || { echo 'incremental backup folder not found' ; exit 1; }
    c=0
    # find backup that would leave the smallest gap
    for dir in 2???-??-??;
    do c="`expr $c + 1`";
    dirsec=$(date -d "$dir 00:00" +%s)
    eval name_$c=$dirsec;
    if (($c > 4)) && (($mingap > 86399));
    then {
    cpre="`expr $c - 4`";
    ccur="`expr $c - 3`";
    cnex="`expr $c - 2`";
    eval namecur=\$name_$ccur;
    eval namepre=\$name_$cpre;
    eval namenex=\$name_$cnex;
    gap=`expr $namenex - $namepre`;
    if (("$gap" < "$mingap"))
    then {
    mingap="$gap"
    export minnamecur="$namecur"
    export minnamenex="$namenex"
    daysgap=`expr $mingap / 86400`
    } fi;
    # reduce acceptable gap size for newer backup increments depending on amount of backups
    gapreduce="`expr $mingap / $treshold`";
    mingap="`expr $mingap - $gapreduce`";
    # echo "DEBUG: $c -- Folder $namecur -- acceptable gap size reduced to: $daysgap days";
    } fi;
    done;
    # merge 2 backup increments / remove the old one.
    if (( $minnamecur > 2000 ))
    then {
    eval rmback=$backup_drive/BACKUP/$(date -d "@$minnamecur" +"%Y-%m-%d");
    eval toback=$backup_drive/BACKUP/$(date -d "@$minnamenex" +"%Y-%m-%d");
    cp -vlrTf $toback $rmback &&\
    rm $toback -rv &&\
    mv -v $rmback $toback &&\
    echo -e "\E[32m===== moved $rmback to $toback, resulting gap: $daysgap days =====" && tput sgr0
    } else echo "Could not find Folder to delete. Maybe gaps between backups are too big already."
    fi;
    } fi;
    popd
    # install grub to backup-drive
    echo grub-install $(mount | grep $(stat -c%m "$backup_drive") | sed "s/[0-9].*//")
    # unmount the backup drive :3
    umount $backup_drive -v \
    || { echo "Something might have gone horribly wrong but now it's too late to do anything about it :D" ; exit 1; }
    From what I can tell, the only way this can have happened is if somehow /BACKUP got created while the script was already running? Or is there another way that I can't see?
    Have to retrace my steps...
    1) Wanted to make a backup of my project in kdevelop with the builtin git function. When I pressed the "git button" in kdevelop, it crashed and all source files of the project were deleted.
    2) temporarily killed git & other backup solutions. Tried to restore files with git manually, but there were no deleted files registered / all commits were empty for some reason.
    3) I connected a SPARE_HD and dumped /dev/sda2 (my swap) to it (used a lot of ram so I hoped the source files were in there)
    4) Connected OLD_BACKUP_HD in addition to SPARE_HD and started backup script ("just in case")
    5) used extundelete to recover everything deleted within the last 2 hours to the SPARE_HD
    6) Started '# strings /sda | grep -n600 "Text that I luckily had in the first line of all my lost files" > /SPARE_HD/big_text-file'
    7) Started rummaging through the stuff that extundelete recovered on the SPARE_HD to see if anything's missing (only 2 of the deleted files were unrecoverable)
    8) Started rummaging through /SPARE_HD/big_text-file with grep, found the last 2 files I was missing in there (HUZZAH!)
    9) Looked at my OLD_BACKUP_HD and saw it had deleted its own incremental BACKUP folder plus itself and its logs...
    Hmmm, I can totally not see myself creating /BACKUP there somewhere between step 4 and 9 because during that time I tried to avoid writing ANYTHING to /dev/sda, but I don't see another way this could have happened, so I guess I must have done it anyway -.-
    *sigh* Maybe it's time to retire that thing... even though it had a pretty good run (several years on several mostly unsupervised machines including a webserver and a router - and no problems)...
    And HOW is it possible that EVERY SINGLE time I lose files, it's because something goes wrong with an attempt to make a BACKUP?
    The only reason I even need backups seems to be that I keep deleting files while trying to make backups (even if, for a change, I don't use any messy self-written scripts), and then I end up doing "good old messy file recovery" instead anyway -.-
    Last edited by whoops (2015-01-31 08:32:59)
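    As an aside, the merge-candidate selection in the script above (pick the increment whose removal leaves the smallest gap between its neighbours) can be expressed without the eval bookkeeping. A rough sketch, assuming GNU date and hypothetical directory names:

    ```shell
    #!/bin/bash
    # Sketch: among the middle entries of a sorted list of YYYY-MM-DD backup
    # names, pick the one whose removal leaves the smallest gap between its
    # neighbours (the role of the eval/name_$c bookkeeping in the script above).
    pick_merge_candidate() {
      local dates=("$@") best_gap=-1 best="" i prev next gap
      for ((i = 1; i < ${#dates[@]} - 1; i++)); do
        prev=$(date -d "${dates[i-1]}" +%s)
        next=$(date -d "${dates[i+1]}" +%s)
        gap=$((next - prev))
        if ((best_gap < 0 || gap < best_gap)); then
          best_gap=$gap
          best=${dates[i]}
        fi
      done
      printf '%s\n' "$best"
    }
    ```

    For example, `pick_merge_candidate 2015-01-01 2015-01-02 2015-01-03 2015-01-10` prints `2015-01-02`: removing it leaves only a two-day gap, while removing 2015-01-03 would leave an eight-day one. The decreasing-acceptable-gap logic from the original script is omitted for clarity.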

  • Javascript Behaviour changed in 3.1 or latest security update?

    Since letting Software Update install everything on my MacBook Air, several sites have started to misbehave in Safari. Specifically, JavaScript-launched new windows are not opening.
    An example is at http://www.telegraph.co.uk where the "Matt" and "Alex" links on the right do not do anything in Safari on a fully updated 10.5.2 system. They work in Firefox on the same machine. They also worked before the update on the same machine. The links also work properly in 10.5.2 on my Mac Pro, which has Safari 3.0.4 and does not yet have the security update installed. Equally, they work properly on a Tiger system which is also fully updated.
    Since I think I installed 3.1 and the security update at the same time, I am not sure which one broke the behaviour.
    Is there a workaround I need to apply to get javascript back?
    An example script that fails on the example site is:
    javascript:MattWindow('/core/Matt/pMattTemplate.jhtml')
    Edit: Popups are enabled in Safari prefs and so is Javascript.
    Message was edited by: David Greenhalgh

    David, It could be a file(s) in your InputManagers folders and/or a SIMBL file.
    Try the following:
    Quit Safari.
    Drag the 'InputManagers' folders out of the Libraries.
    HD/Library/InputManagers
    HD/Users/yourhome/Library/InputManagers <- +(if you have one here)+
    If you have a 'SIMBL' folder in the 'Application Support' folders move it out.
    HD/Users/yourhome/Library/Application Support/SIMBL
    HD/Library/Application Support/SIMBL
    Good luck,
    Tom
    Message was edited by: Tom_X

  • Server Behaviours changed in DW 8?

    Just watching an old movie created using DWMX, and it shows the Server Behaviours panel having "Go to Detail" and "Go to Related" page behaviours listed.
    I don't seem to have these options in DW 8. Have they simply been replaced by the fancy 'Insert - Application Object - Master Detail Page Set', or am I missing something? (For that matter, why doesn't this show in the Server Behaviours panel drop-down?)
    Brendon

    Grr.
    Thanks Murray.
    Brendon
    "Murray *ACE*" <[email protected]> wrote in message news:e7jbth$aem$[email protected]...
    >> Have they simply been replaced by the fancy 'Insert - Application Object - Master Detail Page Set'?
    >
    > Yes, that appears to be the case....
    >
    > Murray --- ICQ 71997575
    > Adobe Community Expert

  • Flex 3/4 Initialization Timing Changes?

    Hi there.  This isn't end-of-the-world critical, but I'm trying to better understand what's going on here.
    We have a relatively large and complex business app (several hundred classes split into a core application framework plus multiple dynamically-loaded product modules), originally written in Flex 3.2.  As part of our new release, we are taking the chance to port it over to Flex 4.5; I've spent the past week doing so.  We're continuing to use the Halo theme for the time being, to minimize the initial disruption.
    I've pretty much gotten it working, but I'm still trying to understand the key behaviour change I've observed: a lot of our code is failing at initialization time. I'm still nailing down exactly what's different, but the most obvious part is that, in commitProperties(), properties that refer to other objects in the same parent don't exist yet, where they previously did, so I'm getting null-pointer exceptions where I wasn't previously.
    Similarly, I have a few objects where a property refers to the object itself. They're graphing objects, which have a property that describes how to set up the dataFunction. This property is typically set to one of several functions available on the object, so graph "foo" would set this function to "foo.useDateForY" or some such, using one of these utility functions on the class that says how to set up the graph. This previously worked, but now the assignment fails because "foo" is null when it tries to resolve the property value. This seems likely to be related to the above.
    Finally, I've also observed what appear to be differences in the lifecycle for non-visible objects. Again, I haven't had time to nail this down precisely, but it feels like I used to get CREATION_COMPLETE in a bunch of cases that aren't getting it any more; specifically, this seems to be happening for objects that I'm building up before adding them to the visible graph. I build lots of screens upfront when a product is loaded, but don't actually add them into the tab structure until they are invoked. Similarly, we have popups that we want to populate as we go, long before they are displayed, but I don't seem to be getting CREATION_COMPLETE for them when I used to.
    So it seems like something has changed subtly in the UIComponent lifecycle, having to do with the timing of resolving IDs, setting properties to them, and calling commitProperties(); maybe also something having to do with how parenting relates to finishing the lifecycle.  Does anybody have a clue what I'm talking about here?  And is there a document somewhere that describes the timing changes more clearly?  I've seen several documents talking about 3 -> 4 changes, but haven't noticed anything on this particular topic...

    Yes, the timing of when child objects are created has changed relative to
    when the constructor runs, but the lifecycle methods and events have not
    changed.  If you rely on commitProperties to resolve things, it should be
    ok.
    The only cases I've seen where creationComplete doesn't fire are when an
    invalidation loop is going on, which is somewhat more likely if you are
    using multiline text controls whose height depends on width or vice versa.

  • Is there a flowchart describing gc behaviour of the Sun hotspot jvm?

    Given that garbage collector tuning is such an important part of application deployment it is surprising that the behaviour of the garbage collector implemented in the Sun JVM is so poorly documented.
    Our system is displaying behaviour changes that we don't expect and do not want, and we can't find enough information to predict how the garbage collector will behave under expected and changing conditions. This makes it impossible to select sensible parameters for our system.
    For example, we have included the -Xincgc option when running a weblogic6.0 server to reduce the size of the gc pauses. Monitoring the memory (both on the weblogic console and via -verbose:gc) we have small gcs happening quite frequently but after a couple of hours it switches over to full gcs and stays that way for ever after. The full gcs bring on the longer delays for garbage collection (typically around 5 seconds) every 5 minutes or so.
    Incidentally, we assume the small gcs are the incremental gcs performed on the old area but there is no way to distinguish those from the little scavenging gcs that are performed even without -Xincgc.
    The total memory used is quite modest (1/4 to 1/2 the size generally) compared to the max heap size and the old area and comparable to the new area.
    If there is a flowchart or UML activity diagram that describes the HotSpot gc behaviour, so that we could be a little more deterministic in our approach to gc tuning, I would be most grateful to get access to it. This trial-and-error approach is very frustrating.
    There's some very useful information out there about the structure of the java heap and the meaning of the various options and even the garbage collection algorithms themselves but it is not sufficient to specify the behaviour of the specific hotspot jvm from Sun Microsystems.
    I liken it to having a class diagram describing a highly dynamic system but no interaction diagrams.
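One practical workaround for the ambiguity mentioned above (not being able to tell incremental collections from ordinary scavenges) is to post-process the -verbose:gc output. The sketch below assumes the classic Sun JVM line format of that era, e.g. "[GC 325407K->83000K(776768K), 0.2300771 secs]" for minor collections and "[Full GC ...]" for full ones; the exact format varies between JVM versions, so treat this as a rough starting point rather than a definitive parser.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Rough classifier for classic Sun JVM -verbose:gc output lines.
// The assumed line shapes are:
//   [GC 325407K->83000K(776768K), 0.2300771 secs]        (minor/incremental)
//   [Full GC 259018K->51312K(776768K), 1.0527780 secs]   (full collection)
public class GcLogClassifier {
    private static final Pattern GC_LINE = Pattern.compile(
        "\\[(Full )?GC (\\d+)K->(\\d+)K\\((\\d+)K\\), ([0-9.]+) secs\\]");

    // True if the line records a full collection rather than a minor one.
    public static boolean isFullGc(String line) {
        Matcher m = GC_LINE.matcher(line);
        return m.find() && m.group(1) != null;
    }

    // Pause time in seconds, or -1 if the line doesn't match the format.
    public static double pauseSeconds(String line) {
        Matcher m = GC_LINE.matcher(line);
        return m.find() ? Double.parseDouble(m.group(5)) : -1.0;
    }
}
```

Feeding the server's gc log through something like this makes it easy to plot pause times and see exactly when the JVM switches from frequent small collections to the 5-second full collections described above.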

    I would also love to have a comprehensive explanation of garbage collection in Java. I'm still mystified by it in some respects.
    The author of this thread has obviously researched Java GC... don't know if this helps, but someone in another thread posted this link to a JDC Tech Tips issue concerning memory allocation and GC:
    http://developer.java.sun.com/developer/TechTips/2000/tt1222.html
    Also the links near the bottom may be worth checking out...
    There's something in that web page that I still don't understand and I think I will post a message about it soon.
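For reference, the kind of invocation being discussed in this thread looks roughly like the following. Flag names, defaults, and behaviour differ between JVM releases (this reflects the 1.3-era Sun JVM), and the heap sizes here are illustrative only, so check your JVM's documentation before relying on any of it.

```shell
# Hypothetical tuning invocation for an old Sun JVM running WebLogic.
# -verbose:gc prints one line per collection; -Xincgc enables the
# incremental collector; NewSize flags pin the young-generation size.
java -verbose:gc -Xincgc -Xms256m -Xmx512m \
     -XX:NewSize=64m -XX:MaxNewSize=64m \
     weblogic.Server
```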
