CardDAV bug in OS 10.9 Server

OS 10.9 Server has a nasty bug in its CardDAV service.
Hardware used: MacMini Server Late 2012, MacBook Pro Mid 2010, iPhone 4S 2012.
Software used: OS 10.9 (MacMini and MacBook), iOS 7.0.4 (iPhone). MacMini has the Server App.
You can replicate it as follows.
0. We assume the server has basic configuration in place (IP, DNS name "server-something.local", and self-signed certificate) and the OSX firewall is down.
1. Server (the App) -> Services -> Contacts: activate the service
2. Server (the App) -> Services -> Web: activate the service then click at the bottom of the window to see the server's web site.
3. Web site -> log in to Profile Manager -> Groups -> All users -> Settings -> Contacts: It says "port 8843".
4. "" -> Users -> select yourself -> Settings -> Contacts: give yourself an account. It says "port 8443".
Note the port numbers are different.
5. iPhone: Install "Fing" from the App Store.
6. Fing -> scan your LAN.
7. select your server's IP
8. scroll down the page and select "Scan services"
9. Note (in Fing) that the server does not have any port 8843 open, that is, the CardDAV server is not running on that port.
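If you don't have Fing handy, the same check can be done from the MacBook with netcat; this is an addition to the recipe, not one of the original steps (substitute your server's DNS name):

    # Probe the two candidate CardDAV ports; -z only tests reachability, -w sets a 5-second timeout
    nc -z -w 5 server-something.local 8843 && echo "8843 open" || echo "8843 closed"
    nc -z -w 5 server-something.local 8443 && echo "8443 open" || echo "8443 closed"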
From the client side,
10. iOS (iPhone) -> Settings -> Mail, Contacts and Calendars -> Add Account -> Add CardDAV Account -> enter 'server-something.local', unix user-name, and password (as specified at point 4 above). It will add the account. Open the new account's entry and select "Advanced Settings": it reads "port 8443".
11. open Contacts (iOS), close all accounts except the new server account, play with it, and see that it works as expected.
12. OSX (MacBook) -> System Preferences -> Internet Accounts -> Add Other Account -> Add an OSX server account -> select your server from the list and fill in the unix user-name and password. It will add the account, and in particular the "Contacts" account. If you click on "Details", it is not possible to change port numbers.
13. open Contacts (OSX), close all accounts except the new server account, play with it, and see that IT DOES NOT WORK.
Summing up,
- If you change the port number to 8843 at step 4 above, it will not do any good, because 8843 is down.
- Changing the CardDAV server port to 8443 is not possible from the Server App.
- The CardDAV port is already 8443, because iOS can see it (steps 10--11 above).
- For Contacts under OSX to work, one would have to make sure it is talking to port 8443, but the configuration panel does not allow for it.
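To see which port actually answers CardDAV requests, one can also probe the standard CardDAV bootstrap URI (/.well-known/carddav, from RFC 6764). Again a suggested cross-check, not part of the original recipe:

    # -k accepts the self-signed certificate; the live port should answer (a redirect or 401),
    # while the dead port should refuse the connection outright
    curl -vk https://server-something.local:8443/.well-known/carddav
    curl -vk https://server-something.local:8843/.well-known/carddav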
Next.
14. OSX (MacBook) -> System Preferences -> Internet Accounts -> Add Other Account -> Add a CardDAV account -> fill in the unix user-name, password, and 'server-something.local'. It will NOT add a CardDAV account. Instead, it will add an OSX server account, as in step 12 above.
To avoid this post being erased by admin as "feedback only", we ask the following questions:
1. Why did Apple not find and fix this before releasing OS 10.9 Server?
2. How do we fix this problem?

Following up on my original post, which said:
3. Web site -> log in to Profile Manager -> Groups -> All users -> Settings -> Contacts: It says "port 8843".
4. "" -> Users -> select yourself -> Settings -> Contacts: give yourself an account. It says "port 8443".
Now (OSX 10.9.1, Server app 3.0.1) the above step 4 says "port 8843", which finally resolves the conflict with CalDAV's own port. I deleted the user profile for CardDAV and kept the group profile. When using Contacts from the client (laptop), I can see the laptop connecting to the server on port TCP:8843.
The iPhone still wants to connect to CardDAV on port TCP:8443, however. In the settings, tapping on the port number allows changing it, so I changed it to 8843 and the change was accepted.
Does it work?
With the three clients up, I created a new contact in each one; after several minutes, they still had not synchronised.
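A server-side check would settle which daemon, if any, is listening on each port; this is a suggestion beyond what the original posts tried:

    # Run on the Mac mini: list listeners on the two candidate CardDAV ports
    sudo lsof -nP -iTCP:8443 -sTCP:LISTEN
    sudo lsof -nP -iTCP:8843 -sTCP:LISTEN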

Similar Messages

  • (BUG??) Team Foundation Server 2013 Update 4 Support for SQL 2014

    Scenario: On Windows 2012 R2 x64 I have installed SQL, Reporting Services & AS 2014 (applied all updates through Windows Update), SharePoint 2013 Foundation, and installed Team Foundation Server 2013 RTM (did not configure).
    Before configuring TFS, I applied Update 4 to the server (not update 1,2 or 3 directly). When trying to configure I get the following error:
    ==========================
    TF255146: Team Foundation Server requires SQL Server 2012 SP1 (11.00.3000) or greater. The SQL Server instance you supplied is version 12.0.2254.0.
    TF400403: The Report Server instance specified is version 12.0.2254.0, the minimum supported version is 11.0.3000.
    TF400070: A required version of a component is not installed on the application tier. You must exit the Team Foundation Administration Console and install a supported version of either SQL Server Analysis Services or the SQL Server Client Tools on the application
    tier to ensure that the Analysis Services object model is present for warehouse processing.
    ====================================
    My understanding is that SQL 2014 has been supported since Update 2 (or was it 3?). However, it appears that if that particular update is not installed directly, the updated compatibility check does not take effect. Can someone verify that I am seeing the intended result, or is this a bug?
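    One way to see exactly what the configuration wizard is being offered is to query the instance yourself (an illustrative check, not from the original post; replace the server/instance name with yours):

        REM Query the engine version of the SQL Server instance supplied to the TFS wizard
        sqlcmd -S localhost -Q "SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('ProductLevel')"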


  • [Bug?] DSC - Registering for Server Events

    [LV2009, Win 7 Pro]
    Howdy (Ben S another one for you? )
    I am trying to register for alarm events.
    This works fine by using the Register for Shared Variable Events method.
    Since I have a lot of SVs and a dedicated server, I thought listen to the server would be much easier to implement.
    The LabVIEW Help implies that using the Request System Event Notifications.vi will do the trick:
    However, I can't seem to Register for Server Events - it doesn't work.
    The code seems quite simple (below) - I just want to listen to all SVs on the localhost (default).
    All the Events are coming through to the DSC (viewed in MAX), however, using the Server method I don't get any notifications through the Event Structure.
    Am I doing something wrong? - Or is this a bug?
    Is there somewhere I need to configure stuff?
    Or does it just not send me Alarm and Notification Events of Shared Variables?
    What does it send me then?
    I couldn't even get the Example in NI Example Finder to work (labview\examples\lvdsc\Event Structure Support).
    It shows the SV Registered Event but not the Server/System Event.
    Attached is a simple project I used to demonstrate this - a SV configured for a HiHi Alarm:
    The work around is to list out all Shared Variables and just read that from disk and load it into the application etc...
    However, it is much more desirable to subscribe to the server and get all SV Events - is this possible?
    Cheers
    -JG
    Certified LabVIEW Architect * LabVIEW Champion
    Attachments:
    DSC Server Events [LV2009].zip ‏20 KB

    Ravens Fan wrote:
    I think if you put in the fully qualified name for the network path and process and library on the other machine, you should be able to get the list.
    You'll have to try it and find out.
    Cheers for posting
    From the help - I didn't think this was possible?
    I gave it a go using the following naming 
    But I get the following error (I tried to connect to multiple SVE across our network and got the same error).
    I can use  though and it returns the local SV List, so I think I have the naming correct?
    <edit>
    My other thought is to build a hook in to the app and invoke this across the network using VI Server, so that it can run the VI on the localhost then report back the SV List).
    </edit>
    Certified LabVIEW Architect * LabVIEW Champion

  • Who got caldav and carddav push running successful on lion server?

    Hello everybody,
    I have been running OS X Server since 10.5, which I upgraded to 10.6 in the past without problems. Upgrading 10.6 to 10.7 was a mess, and none of my several upgrade attempts led to a "stable" Lion server, so I ended up manually migrating my data into a clean install of Lion Server.
    After the data migration most of my services are working as usual (some Profile Manager quirks, but those don't bother me for now).
    Only when it comes to push for the calendar and address book services do I get frustrated!
    Push mail is running fine for my iDevices (push is listed on my iPhone, iPad, and Mac), but for CalDAV and CardDAV my caldavd error logs throw messages like the following whenever I access iCal or Address Book from a Mac or an iDevice (no push available on iPhone or iPad, nor on my Mac).
    Those for caldav
    2011-09-23 11:29:18+0200 [-] [caldav-0]  [-] [twistedcaldav.notify.Notifier#warn] Could not create node /CalDAV/myserver.fqdn/FEF253F7-9A6C-4242-A990-88960832BF5F/
    2011-09-23 11:29:18+0200 [-] [notifications] 2011-09-23 11:29:18+0200 [XmlStream,client] [twistedcaldav.notify.XMPPNotifier#error] PubSub node configuration error: <error code='403' type='auth'><forbidden xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/></error>
    2011-09-23 11:29:18+0200 [-] [notifications] 2011-09-23 11:29:18+0200 [XmlStream,client] [twistedcaldav.notify.XMPPNotifier#error] PubSub node configuration error: <error code='403' type='auth'><forbidden xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/></error>
    and those for carddav
    2011-09-23 11:24:35+0200 [-] [caldav-1]  [-] [twistedcaldav.notify.Notifier#warn] Could not create node /CardDAV/myserver.fqdn/FEF253F7-9A6C-4242-A990-88960832BF5F/
    2011-09-23 11:24:35+0200 [-] [notifications] 2011-09-23 11:24:35+0200 [XmlStream,client] [twistedcaldav.notify.XMPPNotifier#error] PubSub node configuration error: <error code='403' type='auth'><forbidden xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/></error>
    2011-09-23 11:24:35+0200 [-] [notifications] 2011-09-23 11:24:35+0200 [XmlStream,client] [twistedcaldav.notify.XMPPNotifier#error] PubSub node configuration error: <error code='403' type='auth'><forbidden xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/></error>
    I tried several things (changing serveradmin settings, plist and other configuration files, checking passwords, renewing Apple push certificates, etc.) but that changed nothing. So if anyone has these push services running correctly, I would appreciate it if they could share their configuration so I can compare it with mine.
    I think the following files and command output are the relevant parts of the configuration; if other files are important, please mention them.
    If anyone is reluctant to post their configuration for security reasons, please ask for my email address.
    Output of terminal command:
    sudo serveradmin settings calendar
    sudo serveradmin settings addressbook
    sudo serveradmin settings notification
    sudo serveradmin settings jabber
    plist and other configuration files of interest
    /etc/caldavd/caldavd.plist
    /etc/jabberd/*
    /etc/jabberd_notification/*
    Best regards,
    Eldrik

    This error...  <error code='403' type='auth'><forbidden xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/></error> ...looks like the XMPP notification server is not allowing the calendar server's XMPP account (or "JID") to create/configure pubsub nodes.  The XMPP notification server keeps a list of privileged JIDs in /Library/Preferences/com.apple.NotificationServer.plist.
    First, check to see that calendar server is configured to use the "com.apple.notificationuser@<your hostname>" account by running this command in terminal:
    sudo serveradmin settings calendar:Notifications:Services:XMPPNotifier:JID
    You should see something like:
    calendar:Notifications:Services:XMPPNotifier:JID = "[email protected]"
    Next, see who is in the notification server's "privileged users" list via:
    serveradmin settings notification:privilegedUsers
    You should see something like:
    notification:privilegedUsers:_array_index:0 = "_notification_user"
    notification:privilegedUsers:_array_index:1 = "com.apple.notificationuser"
    If you don't see "com.apple.notificationuser", you'll need to edit /Library/Preferences/com.apple.NotificationServer.plist and add it to the "privilegedUsers" array like:
    <key>privilegedUsers</key>
    <array>
        <string>_notification_user</string>
        <string>com.apple.notificationuser</string>
    </array>
    Then I would recommend rebooting the server so that you are sure the notification server is restarted and re-reads the plist.
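    As a lighter-weight alternative to a full reboot, restarting just the affected services from Terminal may be enough; treat this as a sketch (the service names are the ones used by the serveradmin commands above):

        # Restart the XMPP notification server, then the calendar and address book services
        sudo serveradmin stop notification && sudo serveradmin start notification
        sudo serveradmin stop calendar && sudo serveradmin start calendar
        sudo serveradmin stop addressbook && sudo serveradmin start addressbook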

  • Vexing 10.5.6 file system bugs causing crashes; worse with Server OS 10.5.x

    We continue to experience occasional MacOS 10.5.6 data volume instabilities. Just today, the system reported that it could not repair a purely archival data volume at a time when I was not accessing the volume and had not asked the OS to 'repair' it.
    Then when manually running a repair on that same data volume in Disk Utility, storage services caused a kernel panic crash.
    This problem was more prevalent when I tried the server version of Mac OS 10.5.6, where the OS caused disk volume corruption on the dedicated TimeMachine backup drive, and then started to crash frequently in the face of that self-induced corruption.
    There was no disk volume corruption and no crashes before 10.5.6.
    There should never be a file system condition on a data volume that induces the entire OS to crash. I would completely understand a forced dismount of the corrupted volume, or continuing in read-only mode until the impacted volume could be repaired, BUT NOT AN OS CRASH. Even attempting to use the repair function on the unmounted volume would cause a crash.
    The reason I know this is a file system bug and not the hardware is that the same system can endlessly zero out the data on the same drives, copy large test files to & from the drives, etc. The only situation where these file system crashes happen is with high file counts as one would encounter on a backup drive or on a data archive drive.

    This evening OS 10.5.6 reported that Time Machine could not write to its target backup volume on one of the new 1.5TB drives...the nightmare cycle of
    1. OS 10.5.6 causes volume corruption
    2. OS 10.5.6 unable to repair volume corruption, crashing instead
    3. OS 10.5.6 crashes over and over as it tries to auto-mount the volume
    4. hard drive has to be physically ejected and wiped using a WINDOWS system so that it can be used again!
    This is a sickening tale of disk corruption woe that started with the OS 10.5.6 update.
    As fast as I opened an AppleCare support case on this issue, they were closing it behind my back without resolving the issue.
    The kernel panics related to the volume mount problem are simply outrageous:
    "jnl: mod block start: buffsize 512 not a multiple of block size 4096\n"
    KERNEL PANICS SHOULD NEVER HAPPEN DUE TO A VOLUME MOUNT ERROR---EVER.
    And yet I see in the support database that this is a recurring theme that has been seen in other OS storage situations, including Xserve storage where the bug would cause much more business damage.
    I want this fixed NOW.
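    For what it's worth (a suggestion, not something from the original posts), the panic message complains about a 512-byte journal buffer versus a 4096-byte block size, and the block sizes in play can at least be inspected per volume:

        # Show the block sizes the journal is dealing with (the volume path is an example)
        diskutil info /Volumes/Backup | grep -i "block size"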

  • Bug ID Detail 6245922 "Application Server crashes consistently."

    Does anyone have the details of bug ID 6245922?
    It is one of the bug fixes listed in the "Sun Java System Application Server Enterprise Edition 8.1 2005Q2 Update 2 Release Notes".

    I'm getting the same problem, except if I try to run as administrator then iTunes wants to erase all of my music on the iPod and set it to blank. I normally do not use the administrative account for anything other than administrative tasks.

  • Are the Sun maintainers monitoring bug reports etc. in netscape.server.directory?

    I still follow the netscape.server.directory newsgroup.
    (By the way, I much prefer a newsgroup discussion list to this web forum format, and highly recommend that you reconsider only having this web forum for discussion.) Anyway, the now Netscape/AOL developers have mentioned finding a bug in the 'changelog' code in DS5.1/NS6.0. Since we will be very interested in looking at this 'legacy' feature because of a metadirectory product we have, is this a bug you are aware of and are fixing yourselves?

    Yes we monitor the newsgroup occasionally...
    I also concur on the idea of having something other than this web forum for discussion, but it's not my choice and I make do with it.
    The bug you're referring to is known and will be fixed in 5.2.
    However, you can always open a support call if you want a faster resolution.
    Regards,
    Ludovic.

  • Possible bug in the arch NFS server package?

    I run the NFS server on my Arch box, exporting to a Debian box and an Arch laptop. Whenever the Arch server reboots lately, the shares don't automatically remount properly on the clients. They're listed in mtab, but if I ls or try to access the directories, I get a permission denied error. I have to manually unmount the shares and then remount them to be able to see/use them.
    The reason I think this is an Arch package problem is that I set up a share on the Debian box to share with Arch, and that worked perfectly. When I rebooted Debian as the server, the shares were automatically remounted on the Arch client, and when I rebooted Arch, the shares were again mounted properly on reboot.
    It's possible I'm doing something wrong with permissions, but it seems unlikely because 1) everything was working fine for a long time until recently, when I started noticing the behavior, 2) all the permissions on the shared directory are identical to the ones on the Arch shared directory, all user name UIDs are the same, same groups and GIDs, etc., 3) the shares mount perfectly well manually from the command line, and 4) I set up the Debian share/exports, etc. in about 2 minutes with no problem at all, while I've been dealing with this problem on Arch for 2 days now, after changing options and going over everything multiple times until my head is spinning. It just seems unlikely that a configuration is wrong, although I guess anything is possible. I can provide all that permissions/group info, fstab info, /etc/exports info, etc. if anyone wants to take a closer look at it.
    So until this is sorted, I wondered if anyone else is having this problem, or if anyone has ideas about something I might be overlooking. Again, everything *seems* to be set up right, but maybe there's some Arch-specific thing I'm missing. Thanks.
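    One way to narrow this down - an addition, not something from the original thread - is to compare what the server registers with the portmapper before and after a reboot, as seen from one of the clients:

        # "archserver" is a placeholder for the arch box's hostname
        rpcinfo -p archserver    # registered RPC programs (nfs, mountd, status, ...)
        showmount -e archserver  # exports the server is currently advertising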

    OK, out of pure frustration I just grabbed the Gentoo init script, installed start-stop-daemon, modified the script to run under #!/bin/bash, stuck it in /etc/rc.d, and rebooted; everything works. I can reboot my computer and the clients reconnect, or restart the daemon and they reconnect.
    Here's the script I am using.
    #!/bin/bash
    # Copyright 1999-2005 Gentoo Foundation
    # Distributed under the terms of the GNU General Public License v2
    # $Header: /var/cvsroot/gentoo-x86/net-fs/nfs-utils/files/nfs,v 1.14 2007/03/24 10:14:43 vapier Exp $
    #
    # This script starts/stops the following
    #   rpc.statd if necessary (also checked by init.d/nfsmount)
    #   rpc.rquotad if exists (from quota package)
    #   rpc.nfsd
    #   rpc.mountd
    # NB: Config is in /etc/conf.d/nfs

    opts="reload"   # Gentoo runscript leftover; harmless in plain bash

    # This variable is used for controlling whether or not to run exportfs -ua;
    # see stop) for more information. It can be preset in the environment so
    # that the restart) branch below can pass it to the "stop" re-invocation.
    restarting=${restarting:-no}

    # The binary locations
    exportfs=/usr/sbin/exportfs
    gssd=/usr/sbin/rpc.gssd
    idmapd=/usr/sbin/rpc.idmapd
    mountd=/usr/sbin/rpc.mountd
    nfsd=/usr/sbin/rpc.nfsd
    rquotad=/usr/sbin/rpc.rquotad
    statd=/usr/sbin/rpc.statd
    svcgssd=/usr/sbin/rpc.svcgssd

    mkdir_nfsdirs() {
        local d
        for d in /var/lib/nfs/{rpc_pipefs,v4recovery,v4root} ; do
            [[ ! -d ${d} ]] && mkdir -p "${d}"
        done
    }

    mount_pipefs() {
        if grep -q rpc_pipefs /proc/filesystems ; then
            if ! grep -q "rpc_pipefs /var/lib/nfs/rpc_pipefs" /proc/mounts ; then
                mount -t rpc_pipefs rpc_pipefs /var/lib/nfs/rpc_pipefs
            fi
        fi
    }

    umount_pipefs() {
        if [[ ${restarting} == "no" ]] ; then
            if grep -q "rpc_pipefs /var/lib/nfs/rpc_pipefs" /proc/mounts ; then
                umount /var/lib/nfs/rpc_pipefs
            fi
        fi
    }

    start_gssd() {
        [[ ! -x ${gssd} || ! -x ${svcgssd} ]] && return 0
        local ret1 ret2
        ${gssd} ${RPCGSSDDOPTS}
        ret1=$?
        ${svcgssd} ${RPCSVCGSSDDOPTS}
        ret2=$?
        return $((${ret1} + ${ret2}))
    }

    stop_gssd() {
        [[ ! -x ${gssd} || ! -x ${svcgssd} ]] && return 0
        local ret1 ret2
        start-stop-daemon --stop --quiet --exec ${gssd}
        ret1=$?
        start-stop-daemon --stop --quiet --exec ${svcgssd}
        ret2=$?
        return $((${ret1} + ${ret2}))
    }

    start_idmapd() {
        [[ ! -x ${idmapd} ]] && return 0
        ${idmapd} ${RPCIDMAPDOPTS}
    }

    stop_idmapd() {
        [[ ! -x ${idmapd} ]] && return 0
        local ret
        start-stop-daemon --stop --quiet --exec ${idmapd}
        ret=$?
        umount_pipefs
        return ${ret}
    }

    start_statd() {
        # Don't start rpc.statd if already started by init.d/nfsmount
        killall -0 rpc.statd &>/dev/null && return 0
        start-stop-daemon --start --quiet --exec \
            $statd -- $RPCSTATDOPTS 1>&2
    }

    stop_statd() {
        # Don't stop rpc.statd if it's in use by init.d/nfsmount.
        mount -t nfs | grep -q . && return 0
        # Make sure it's actually running
        killall -0 rpc.statd &>/dev/null || return 0
        # Okay, all tests passed, stop rpc.statd
        start-stop-daemon --stop --quiet --exec $statd 1>&2
    }

    waitfor_exportfs() {
        # Kill exportfs if it takes longer than EXPORTFSTIMEOUT (default 30s)
        local pid=$1
        ( sleep ${EXPORTFSTIMEOUT:-30}; kill -9 $pid &>/dev/null ) &
        wait $1
    }

    case "$1" in
    start)
        # Make sure nfs support is loaded in the kernel #64709
        if [[ -e /proc/modules ]] && ! grep -qs nfsd /proc/filesystems ; then
            modprobe nfsd &> /dev/null
        fi
        # This is the new "kernel 2.6 way" to handle the exports file
        if grep -qs nfsd /proc/filesystems ; then
            if ! grep -qs "^nfsd[[:space:]]/proc/fs/nfsd[[:space:]]" /proc/mounts ; then
                mount -t nfsd nfsd /proc/fs/nfsd
            fi
        fi
        # now that nfsd is mounted inside /proc, we can safely start mountd later
        mkdir_nfsdirs
        mount_pipefs
        start_idmapd
        start_gssd
        start_statd
        # Exportfs likes to hang if networking isn't working.
        # If that's the case, then try to kill it so the
        # bootup process can continue.
        if grep -q '^/' /etc/exports &>/dev/null; then
            $exportfs -r 1>&2 &
            waitfor_exportfs $!
        fi
        if [ -x $rquotad ]; then
            start-stop-daemon --start --quiet --exec \
                $rquotad -- $RPCRQUOTADOPTS 1>&2
        fi
        start-stop-daemon --start --quiet --exec \
            $nfsd --name nfsd -- $RPCNFSDCOUNT 1>&2
        # Start mountd
        start-stop-daemon --start --quiet --exec \
            $mountd -- $RPCMOUNTDOPTS 1>&2
        ;;
    stop)
        # Don't check NFSSERVER variable since it might have changed,
        # instead use --oknodo to smooth things over
        start-stop-daemon --stop --quiet --oknodo \
            --exec $mountd 1>&2
        # nfsd sets its process name to [nfsd] so don't look for $nfsd
        start-stop-daemon --stop --quiet --oknodo \
            --name nfsd --user root --signal 2 1>&2
        if [ -x $rquotad ]; then
            start-stop-daemon --stop --quiet --oknodo \
                --exec $rquotad 1>&2
        fi
        # When restarting the NFS server, running "exportfs -ua" probably
        # isn't what the user wants. Running it causes all entries listed
        # in xtab to be removed from the kernel export tables, and the
        # xtab file is cleared. This effectively shuts down all NFS
        # activity, leaving all clients holding stale NFS filehandles,
        # *even* when the NFS server has restarted.
        # That's what you would want if you were shutting down the NFS
        # server for good, or for a long period of time, but not when the
        # NFS server will be running again in short order. In this case,
        # then "exportfs -r" will reread the xtab, and all the current
        # clients will be able to resume NFS activity, *without* needing
        # to umount/(re)mount the filesystem.
        if [ "$restarting" = no ]; then
            # Exportfs likes to hang if networking isn't working.
            # If that's the case, then try to kill it so the
            # shutdown process can continue.
            $exportfs -ua 1>&2 &
            waitfor_exportfs $!
        fi
        stop_statd
        stop_gssd
        stop_idmapd
        umount_pipefs
        ;;
    reload)
        # Exportfs likes to hang if networking isn't working.
        # If that's the case, then try to kill it so the
        # bootup process can continue.
        $exportfs -r 1>&2 &
        waitfor_exportfs $!
        ;;
    restart)
        # See long comment in stop) regarding "restarting" and exportfs -ua.
        # (Gentoo's svc_stop/svc_start don't exist in plain bash, so re-invoke
        # the script, passing "restarting" through the environment.)
        restarting=yes "$0" stop
        "$0" start
        ;;
    *)
        echo "usage: $0 {start|stop|restart|reload}"
        ;;
    esac
    exit 0

  • BUG - FTP with case sensitive server

    My provider uses case sensitive file and directory names. I created a directory 'OtherStuff' on the server directly (not from the remote view). In Dreamweaver's FTP setup I erred by specifying the subdirectory as 'otherstuff'. When I created a default html file and PUT it, FTP failed to find the directory but continued and put the default html file in the root, overwriting my default web file.
    Yes, I erred. However, it would be good if Dreamweaver would stop when it encounters a cd failure.

    jjstafford wrote:
    > My provider uses case sensitive file and directory names. I created a
    > directory 'OtherStuff' on the server directly (not from remote view). In
    > Dreamweaver's FTP setup I erred by specifying the subdirectory of
    > 'otherstuff'. When I created a default html file and PUT it, FTP failed
    > to find the directory but continued and put the default html file in the
    > root, overwriting my default web file.
    >
    > Yes, I erred. However, it would be good if Dreamweaver would stop when
    > it encounters a cd failure.
    >
    You can enable case sensitive link checking in the site manager.
    However, this is more an issue of the OS than of DW. Windows - which is the OS you use, I assume - is about the only OS (that I know of) which is case insensitive!
    All Linux/Unix based servers are case sensitive.

  • Bug when using Edit in Server Option while editing HiQ node.

    I have recently started using HiQ nodes in my applications, but am running into problems. If I enter the "Edit in Server Option" and then change applications or click on another window without closing the HiQ server window I run into problems. I can no longer enter the "Edit in Server Option" and the HiQ program will not run. I can not seem to find the HiQ server window. The only way I have found around this problem is to close the current document, reopen it, delete the HiQ node and rewrite the entire code, and be very careful to always close the HiQ server window before doing anything else. Of course this is not a suitable solution. Has anyone else run into this problem? Have you figured out a fix?
    Thanks,

    Hi,
    Based on my experience, it seems one of your nodes has not synced the resource configuration. Please run the cluster validation report test, then post the error or warning part.
    Thanks.

  • Cirrus has a bug for a long time

    [ Before starting this discussion, I should tell you that my English is not good. If you read kindly, I appreciate it. ]
    I found a bug in Cirrus network connections in AS3.
    But I like Cirrus very much, because it needs no AIR runtime on the server and has UDP hole punching, etc...
    So I just waited and waited for the bug to be fixed by Adobe.
    I wanted that very much.
    But a few years later, the bug still exists.
    I explain the bug in this post.
    [Server] NetStream.publish('name')
    [Client] NetStream.play('name')
    This method finally establishes a Cirrus connection between two peers.
    I followed the example code in the Adobe documentation faithfully.
    It succeeds.
    But it fails sometimes. (This is the bug!)
    ## SYMPTOMS ##
    1. There is about a 40% probability of failing to make a connection.
    2. There is no disconnection event.
    3. There are no error messages and no response at all when connecting fails.
    4. If the server's IP address and the client's IP address are the same, connecting succeeds 100% of the time.
    5. If a client fails to connect to a server once, it will never succeed in connecting to the same server, no matter how often it tries again.
       If the client has to connect to that server, it needs to reload the program to get a new NetConnection ID.
    6. If one client fails to connect to a server, other clients still have some probability of connecting to the same server.
    I made many online games with Cirrus and a lot of users are playing them.
    They badly need a reliable connection method.
    Please help them and me.
    Thank you.

    1. P2P connections are not always possible depending on the configuration of NATs and firewalls.  for an explanation, please see this posting:
      https://forums.adobe.com/message/1064983#1064983
    to understand what's really happening, please see
      http://tools.ietf.org/html/rfc7016#section-3.5.1.6
    2, 3. there should be an event after a timeout, around 2 minutes.
    4. that's expected.  the "behind NAT" local addresses should be reachable on the same computer.
    5. that's expected.  the failure to connect is because a P2P path can't be found.  see answer 1 above.  however, you don't need to reload the SWF -- you just need to make a new NetConnection to get a new peer ID.
    6. that's expected, see answer 1 above.
    since P2P connections aren't always possible, you might need to provide a server for client-server-client fallback when P2P doesn't work.

  • 10.5 server, 10.4 clients getting multiple mobile accounts - weird results

    I would like to reopen this discussion:
    http://discussions.apple.com/thread.jspa?threadID=1664772&tstart=7
    What happens visually is that the user appears to log in to a network account, but the Macintosh HD icon changes to the "house" used for the home directory, and all the mobile account data (which is naturally in /Users/<login>) is not accessible. If you use Netinfo Manager or System Preferences, you can see multiple accounts for the user.
    We have been getting many laptops randomly succumbing to this bug. 10.5.8 server, 10.4.11 clients. I ran nicl on one that was affected today, with "nicl . -list /users", and found 3 user account records with the same login. I then used the "directory IDs" from the nicl -list commands and compared the data for each account with "nicl -v . -read <dirID>" replacing <dirID> with the numeric directory IDs for the accounts.
    One of the accounts had no "home" attribute, so I deleted it using "sudo nicl . -delete <dirID>". The only difference between the other accounts is the value of the "copy_timestamp" attribute (it differed by 20 seconds or so). I blindly removed the record with the later copy_timestamp value, after which I was able to login to the mobile account normally.
    Interestingly during the login, I pinged the machine rapidly over ssh, running the "nicl . -list /users" command. I could see the original directory ID. Then for a while a new directory ID appeared and the old one was gone. Then both the old and the new appeared. Finally, after the successful login, the old directory ID was back. I guess the mobile account login process is constantly banging on Netinfo.
    Another thing to note is that when I go to Workgroup Manager (10.5) and bring up the Mobility > Account Creation preferences, they show up with the "Never" and "Always" buttons half-selected ("-"), as well as the one for the "Show "Don't ask me again" checkbox" setting. I guess the com.apple.MCX.plist file schema changed from 10.4 to 10.5. I will research the differences. Maybe I'll get lucky and stop this behavior from happening...
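    For reference, here is the inspection sequence described above, consolidated (run on the affected 10.4 client; <dirID> stands for a directory ID taken from the -list output):

        # List user records; a duplicated login shows up as several directory IDs
        nicl . -list /users
        # Dump one record to compare its "home" and "copy_timestamp" attributes
        nicl -v . -read <dirID>
        # Delete the bogus duplicate (the one missing "home", or with the later copy_timestamp)
        sudo nicl . -delete <dirID>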

    The thing that causes the "-" half-selected buttons on the Account Creation tab is the absence of a value for the (new in 10.5?) attribute in the com.apple.MCX plist file. You can find this by using the Inspector in Workgroup Manager, getting the user account and editing the MCXSettings attribute:
    cachedaccounts.WarnOnCreate.allowNever
    otherwise known as "Show Mobile Account Dialog's Never Option" if you look in the Details tab of Workgroup Manager,
    otherwise known as "Show "Don't ask me again" checkbox" if you look in the Account Creation tab of Workgroup Manager.
    Pet peeve -- three different terms for the same thing?

  • Weblogic server 10.3.6 or 10.3.5?

    Hi
    We are currently on WebLogic Server 10.3.5. JDK 1.6 is certified for this.
    As part of an OBIEE upgrade from 11.1.1.5 to 11.1.1.6.2, we are looking at upgrading WebLogic Server to 10.3.6.
    If we do this then we must also upgrade the JDK to 1.7.
    Has anyone done this? Are there significant benefits to doing so, or should we just leave WLS at 10.3.5 and save ourselves having to upgrade the JDK too?
    Thanks for any tips,
    DA.

    Hi,
    For the Oracle certification matrix (to check your JDK version), see the OBIEE 11.1.1.6.0 current system certification matrix:
    http://www.oracle.com/technetwork/middleware/bi-enterprise-edition/bi-11gr1certmatrix-166168.xls
    Patch for the WebLogic 10.3.6.0 version:
    You can download the upgrade installer for 10.3.6 from support.oracle.com - Patch 13529639: PLACEHOLDER BUG FOR WEBLOGIC SERVER 11GR1 (10.3.6) UPGRADE INSTALLER.
    The WebLogic Server 10.3.6.0 patch set is applied to existing WebLogic Server 10.3.5.0 installations, or prior WebLogic Server 10.3.x installations; it includes fixes for JDeveloper/ADF bugs.
    Note: Oracle WebLogic Server can be at either version 10.3.6 or 10.3.5 (both are supported with OBIEE Release 11.1.1.6.0).
    Then just follow the upgrade steps:
    Re: upgrade 11.1.1.5 to 11.1.1.6 which Oracle BI Product Installer?
    Upgrade 11.1.1.5 to 11.1.1.6
    Thanks
    Deva
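    To confirm which WLS version an installation is actually running before and after patching (the WL_HOME path below is only an example):

        # Source the WebLogic environment, then print the installed server version
        . /u01/oracle/middleware/wlserver_10.3/server/bin/setWLSEnv.sh
        java weblogic.version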

  • Not a GROUP BY expression - Oracle 10g bug?

    Hi,
    I am getting an ORA-00979 "not a GROUP BY expression" error on Oracle 10g 10.2.0.4.0 - 64bit Production.
    To illustrate my problem I created the following example.
    Let's say I have a shop selling clothes. Every time I sell something, I store this information in the database - the actual time, the clothes type (trousers, socks, ...) and the size of the piece (M, L, XL, ...).
    Now, the system computes statistics every hour. It goes through the table of sold pieces and counts the number of pieces per clothes type and per size from the beginning of the day. It is important to realize that it is from the beginning of the day: because of that, the number of sold pieces in the statistics table grows every hour (or at least stays at the same value as in the previous hour).
    Now, from this statistics table I need to derive a new statistic: how many pieces per size I sold in each hour.
    I created this query for that:
    SELECT TIME, xSIZE, (SOLD  - NVL((SELECT SUM(S1.SOLD)
                                      FROM STATISTICS S1
                                      WHERE S1.xSIZE = S.xSIZE
                                        AND TRUNC(S1.TIME, 'HH24') + 1/24 = S.TIME
                                        AND TO_CHAR(S1.TIME, 'HH24') != '23'
                                        AND S1.xSIZE IS NOT NULL
                                      GROUP BY TRUNC(S1.TIME, 'HH24'), S1.xSIZE),0)) SOLD
    FROM(
    SELECT TRUNC(S.TIME, 'HH24') TIME, S.xSIZE, SUM(S.SOLD) SOLD
    FROM STATISTICS S
    WHERE S.xSIZE IS NOT NULL
    GROUP BY TRUNC(S.TIME, 'HH24'), S.xSIZE
    --ORDER BY 1 DESC
    ) S
    ORDER BY TIME DESC, xSIZE ASC
    First I select the number of sold pieces per hour per size. To get the number of sold pieces for a particular hour, I need to subtract the number sold as of the previous hour from this value. I decided to do this with a correlated subquery...
    Running the query like this, I get the "not a GROUP BY expression" error. However, if I uncomment the "ORDER BY 1 DESC" line, the query works. I am pretty sure it has something to do with this line:
    AND TRUNC(S1.TIME, 'HH24') + 1/24 = S.TIME
    If you modify this query like this:
    SELECT TIME, xSIZE, (SOLD  - NVL((SELECT SUM(S1.SOLD)
                                      FROM STATISTICS S1
                                      WHERE S1.xSIZE = S.xSIZE
                                        --AND TRUNC(S1.TIME, 'HH24') + 1/24 = S.TIME
                                        AND TO_CHAR(S1.TIME, 'HH24') != '23'
                                        AND S1.xSIZE IS NOT NULL
                                      GROUP BY  S1.xSIZE),0)) SOLD
    FROM(
    SELECT TRUNC(S.TIME, 'HH24') TIME, S.xSIZE, SUM(S.SOLD) SOLD
    FROM STATISTICS S
    WHERE S.xSIZE IS NOT NULL
    GROUP BY TRUNC(S.TIME, 'HH24'), S.xSIZE
    --ORDER BY 1 DESC
    ) S
    ORDER BY TIME DESC, xSIZE ASC
    Here I removed the join on the truncated time and the grouping by the truncated time -> the query does not fail...
    And now the best part: if you run the first query on Oracle 11g (Release 11.1.0.6.0 - 64bit Production), it works.
    Does anybody know why is the first query not working on 10g? Is there some bug or limitation for this server version?
    Please don't tell me to rewrite the query in another way; I already did that, and it works on 10g as well. I am just curious why this version doesn't work on 10g.
    Finally here are some data for testing.
    CREATE TABLE STATISTICS(
      TIME DATE DEFAULT SYSDATE,
      TYPE VARCHAR2(20),
      xSIZE VARCHAR2(2),
      SOLD NUMBER(5,0) DEFAULT 0
    );
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 2/24, 'T-Shirt', 'M', 10);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 2/24, 'Socks', 'M', 3);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 2/24, 'T-Shirt', 'L', 1);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 2/24, 'Socks', 'L', 50);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 2/24, 'Trousers', 'XL', 7);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 2/24, 'Socks', 'XL', 3);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 1/24, 'T-Shirt', 'M', 13);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 1/24, 'Socks', 'L', 60);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 1/24, 'Trousers', 'XL', 15);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 1/24, 'Socks', 'XL', 6);
    Edited by: user12047225 on 20.9.2011 23:12
    Edited by: user12047225 on 20.9.2011 23:45

    It is a known issue when the optimizer decides to expand an in-line view. You can add something (besides the ORDER BY you already used) to the in-line view to prevent the optimizer from expanding it. For example:
    SQL> SELECT  TIME,
      2          xSIZE,
      3          (SOLD - NVL(
      4                      (
      5                       SELECT  SUM(S1.SOLD)
      6                         FROM  STATISTICS S1
      7                         WHERE S1.xSIZE = S.xSIZE
      8                           AND TRUNC(S1.TIME, 'HH24') + 1/24 = S.TIME
      9                           AND TO_CHAR(S1.TIME, 'HH24') != '23'
    10                           AND S1.xSIZE IS NOT NULL
    11                           GROUP BY TRUNC(S1.TIME, 'HH24'),
    12                                    S1.xSIZE
    13                      ),
    14                      0
    15                     )
    16          ) SOLD
    17    FROM  (
    18           SELECT  TRUNC(S.TIME, 'HH24') TIME,
    19                   S.xSIZE,
    20                   SUM(S.SOLD) SOLD
    21             FROM  STATISTICS S
    22             WHERE S.xSIZE IS NOT NULL
    23             GROUP BY TRUNC(S.TIME, 'HH24'),
    24                      S.xSIZE
    25           --ORDER BY 1 DESC
    26          ) S
    27    ORDER BY TIME DESC,
    28             xSIZE ASC
    29  /
             SELECT  TRUNC(S.TIME, 'HH24') TIME,
    ERROR at line 18:
    ORA-00979: not a GROUP BY expression
    SQL> SELECT  TIME,
      2          xSIZE,
      3          (SOLD - NVL(
      4                      (
      5                       SELECT  SUM(S1.SOLD)
      6                         FROM  STATISTICS S1
      7                         WHERE S1.xSIZE = S.xSIZE
      8                           AND TRUNC(S1.TIME, 'HH24') + 1/24 = S.TIME
      9                           AND TO_CHAR(S1.TIME, 'HH24') != '23'
    10                           AND S1.xSIZE IS NOT NULL
    11                           GROUP BY TRUNC(S1.TIME, 'HH24'),
    12                                    S1.xSIZE
    13                      ),
    14                      0
    15                     )
    16          ) SOLD
    17    FROM  (
    18           SELECT  TRUNC(S.TIME, 'HH24') TIME,
    19                   S.xSIZE,
    20                   SUM(S.SOLD) SOLD,
    21                   ROW_NUMBER() OVER(ORDER BY SUM(S.SOLD)) RN
    22             FROM  STATISTICS S
    23             WHERE S.xSIZE IS NOT NULL
    24             GROUP BY TRUNC(S.TIME, 'HH24'),
    25                      S.xSIZE
    26           --ORDER BY 1 DESC
    27          ) S
    28    ORDER BY TIME DESC,
    29             xSIZE ASC
    30  /
    TIME      XS       SOLD
    20-SEP-11 L           9
    20-SEP-11 M           0
    20-SEP-11 XL         11
    20-SEP-11 L          51
    20-SEP-11 M          13
    20-SEP-11 XL         10
    6 rows selected.
    SQL>
    Or use subquery factoring (WITH clause) + the undocumented hint MATERIALIZE:
    SQL> WITH S AS (
      2             SELECT  /*+ MATERIALIZE */ TRUNC(S.TIME, 'HH24') TIME,
      3                     S.xSIZE,
      4                     SUM(S.SOLD) SOLD
      5               FROM  STATISTICS S
      6               WHERE S.xSIZE IS NOT NULL
      7               GROUP BY TRUNC(S.TIME, 'HH24'),
      8                        S.xSIZE
      9             --ORDER BY 1 DESC
    10            )
    11  SELECT  TIME,
    12          xSIZE,
    13          (SOLD - NVL(
    14                      (
    15                       SELECT  SUM(S1.SOLD)
    16                         FROM  STATISTICS S1
    17                         WHERE S1.xSIZE = S.xSIZE
    18                           AND TRUNC(S1.TIME, 'HH24') + 1/24 = S.TIME
    19                           AND TO_CHAR(S1.TIME, 'HH24') != '23'
    20                           AND S1.xSIZE IS NOT NULL
    21                           GROUP BY TRUNC(S1.TIME, 'HH24'),
    22                                    S1.xSIZE
    23                      ),
    24                      0
    25                     )
    26          ) SOLD
    27    FROM  S
    28    ORDER BY TIME DESC,
    29             xSIZE ASC
    30  /
    TIME      XS       SOLD
    20-SEP-11 L           9
    20-SEP-11 M           0
    20-SEP-11 XL         11
    20-SEP-11 L          51
    20-SEP-11 M          13
    20-SEP-11 XL         10
    6 rows selected.
    SQL>
    SY.

  • Another bug in Oracle Soa Suite 11gR3 (bpel workflow)

    Hello,
    I am getting an error that I guess is a bug in SOA Suite (BPEL human task workflow).
    I created a simple workflow that has three human tasks (three levels of approval):
    1) I deployed the project.
    2) I submit a request to the BPEL process through SoapUI.
    3) I access the worklistapp application and log in as the user approver1.
    4) I click the Approve button. It works.
    5) When I log out and log in as the user approver2 (second human task - second level of approval), the requisition is there, but when I click on the task, the exception below is thrown.
    Obs.: If I restart my WebLogic server and access the worklistapp as the user approver2 (again), it works.
    Question: Should I always restart the WebLogic Server for each step in a workflow? If my BPEL has 5 human tasks, do I have to do 5 restarts of the WebLogic server for it to work?
    Has anyone hit the same bug?
    Error 500--Internal Server Error
    java.lang.NullPointerException
         at oracle.adf.model.binding.DCIteratorBinding.getSortCriteria(DCIteratorBinding.java:3715)
         at oracle.adf.model.binding.DCInvokeMethod.setAssociatedIteratorBinding(DCInvokeMethod.java:865)
         at oracle.adf.model.binding.DCIteratorBinding.cacheRefOnOperation(DCIteratorBinding.java:5132)
         at oracle.jbo.uicli.binding.JUMethodIteratorDef$JUMethodIteratorBinding.getActionBinding(JUMethodIteratorDef.java:283)
         at oracle.jbo.uicli.binding.JUMethodIteratorDef.isRefreshable(JUMethodIteratorDef.java:59)
         at oracle.adf.model.binding.DCExecutableBindingDef.isRefreshable(DCExecutableBindingDef.java:274)
         at oracle.adf.model.binding.DCBindingContainer.internalRefreshControl(DCBindingContainer.java:2975)
         at oracle.adf.model.binding.DCBindingContainer.refresh(DCBindingContainer.java:2845)
         at oracle.adf.controller.v2.lifecycle.PageLifecycleImpl.prepareModel(PageLifecycleImpl.java:112)
         at oracle.adf.controller.v2.lifecycle.Lifecycle$2.execute(Lifecycle.java:137)
         at oracle.adfinternal.controller.lifecycle.LifecycleImpl.executePhase(LifecycleImpl.java:192)
         at oracle.adfinternal.controller.faces.lifecycle.ADFPhaseListener.access$400(ADFPhaseListener.java:21)
         at oracle.adfinternal.controller.faces.lifecycle.ADFPhaseListener$PhaseInvokerImpl.startPageLifecycle(ADFPhaseListener.java:231)
         at oracle.adfinternal.controller.faces.lifecycle.ADFPhaseListener$1.after(ADFPhaseListener.java:267)
         at oracle.adfinternal.controller.faces.lifecycle.ADFPhaseListener.afterPhase(ADFPhaseListener.java:71)
         at oracle.adfinternal.controller.faces.lifecycle.ADFLifecyclePhaseListener.afterPhase(ADFLifecyclePhaseListener.java:53)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl._executePhase(LifecycleImpl.java:364)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:177)
         at javax.faces.webapp.FacesServlet.service(FacesServlet.java:265)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:300)
         at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.adf.model.servlet.ADFBindingFilter.doFilter(ADFBindingFilter.java:191)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.adf.share.http.ServletADFFilter.doFilter(ServletADFFilter.java:62)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.adfinternal.view.faces.webapp.rich.RegistrationFilter.doFilter(RegistrationFilter.java:97)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:420)
         at oracle.adfinternal.view.faces.activedata.AdsFilter.doFilter(AdsFilter.java:60)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:420)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl._doFilterImpl(TrinidadFilterImpl.java:247)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl.doFilter(TrinidadFilterImpl.java:157)
         at org.apache.myfaces.trinidad.webapp.TrinidadFilter.doFilter(TrinidadFilter.java:92)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.bpel.services.workflow.client.worklist.util.WorkflowFilter.doFilter(WorkflowFilter.java:175)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.adf.library.webapp.LibraryFilter.doFilter(LibraryFilter.java:159)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.dms.wls.DMSServletFilter.doFilter(DMSServletFilter.java:330)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.doIt(WebAppServletContext.java:3684)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3650)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2268)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2174)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1446)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

    Thank you, Tomasz!
    That really works! I'll add an example of the changes to DataBindings.cpx, because it is not so obvious what to do after renaming the package and applying the automatic refactoring (which doesn't fix everything it should).
    Assuming we've renamed the package from bpm3 to bpm3.db3.
    Before the changes:
    <?xml version="1.0" encoding="UTF-8" ?>
    <Application xmlns="http://xmlns.oracle.com/adfm/application"
                 version="11.1.1.60.13" id="DataBindings" SeparateXMLFiles="false"
                 Package="bpm3.db3" ClientType="Generic">
      <pageMap>
        <page path="/taskDetails1.jspx" usageId="bpm3_taskDetails1PageDef"/>
      </pageMap>
      <pageDefinitionUsages>
        <page id="bpm3_taskDetails1PageDef"
              path="bpm3.db3.pageDefs.taskDetails1PageDef"/>
      </pageDefinitionUsages>
      <dataControlUsages>
        *<dc id="ApproveForm3_ApproveTask3" path="bpm3.ApproveForm3_ApproveTask3"/>*
      </dataControlUsages>
    </Application>
    After:
    <?xml version="1.0" encoding="UTF-8" ?>
    <Application xmlns="http://xmlns.oracle.com/adfm/application"
                 version="11.1.1.60.13" id="DataBindings" SeparateXMLFiles="false"
                 Package="bpm3.db3" ClientType="Generic">
      <pageMap>
        <page path="/taskDetails1.jspx" usageId="bpm3_db3_taskDetails1PageDef"/>
      </pageMap>
      <pageDefinitionUsages>
        <page id="bpm3_db3_taskDetails1PageDef"
              path="bpm3.db3.pageDefs.taskDetails1PageDef"/>
      </pageDefinitionUsages>
      <dataControlUsages>
        *<dc id="ApproveForm3_ApproveTask3" path="bpm3.db3.ApproveForm3_ApproveTask3"/>*
      </dataControlUsages>
    </Application>
    What really changed:
    - <dc id="ApproveForm3_ApproveTask3" path="bpm3.ApproveForm3_ApproveTask3"/>
    + <dc id="ApproveForm3_ApproveTask3" path="bpm3.db3.ApproveForm3_ApproveTask3"/>
