Perverse behaviour of LR 3.4.1 on photo import

After updating to 3.4.1 import seems to behave differently. It is perverse, counter-intuitive and frustrating (and almost certainly a user error!). Could somebody volunteer to help reduce my blood pressure?
Since the first LR beta in 2006 I have arranged my images in the following structure (sorry, this is wordy and maybe difficult to follow):
<LR Catalog No. x>
     <Major Category> - folder name expressed as text
          <year> - folder name expressed as 'yyyy'
               <date of import> - folder name expressed as 'yyyymmdd'
I use PhotoMechanic to ingest (import) images, which writes them into the LR catalog structure above and also onto a second hard disk, 'unknown' to LR, as a backup. PhotoMechanic scripts automatically create the lowest-level folder names.
Immediately after the ingest LR has no knowledge of the new photos. I then import them, 'in-place' as it were, using the functions of LR. The file structure shown above then appears in the Folders panel on the LHS of the LR display. This has worked fine since 2007.
For the last few weeks of following this process - after the update to LR 3.4.1 - the import step within LR results in the <year> folder now appearing at the same level as the <Major Category> folder. So now the Folders Pane shows:
<2011>
     <20110604>
<Major Category>
     <2004>
     <2005>
     <2011>
          But no folder <20110604>
However, the structure actually on the hard disk, as viewed by Windows explorer, is as I expect it to be. The LR Folders Pane is inconsistent with the physical directory structure.
This problem looks similar to that posted by user Harveyabc about 2 days ago, with the topic 'wind 7 after importing, pics not appearing under My Pictures in left panel'
but the solution suggested there has no effect on my system. I am using 64 bit LR under Win 7 Home Premium edition.
Could I have some suggestions please?
Thanks
Tony

I should add some additional information: the import within LR results in two copies of a folder being created on the disk, but only one of them showing in the Folders Pane. For example, images in the folder <20110601> are imported into a new folder <2011-06-01>, which LR creates and displays under that name, while the original folder (<20110601>) remains on the disk, still containing the originals of the images as placed there by PhotoMechanic.
I do not understand this action, nor how LR is choosing the format of the name of the new folder it is creating.
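In case it helps anyone seeing the same thing, here is a small sketch (my own, not LR's logic; the paths and folder names are hypothetical) that scans a parent folder and lists the date folders which exist in both the yyyymmdd form PhotoMechanic writes and the yyyy-mm-dd form LR appears to be creating:

```python
import re
from pathlib import Path

# PhotoMechanic-style folder name: yyyymmdd
COMPACT = re.compile(r"^(\d{4})(\d{2})(\d{2})$")

def find_duplicate_date_folders(parent):
    """Return (compact, dashed) pairs where both folder forms exist on disk."""
    parent = Path(parent)
    names = {p.name for p in parent.iterdir() if p.is_dir()}
    pairs = []
    for name in sorted(names):
        m = COMPACT.match(name)
        if m:
            dashed = "-".join(m.groups())  # LR-style name: yyyy-mm-dd
            if dashed in names:
                pairs.append((name, dashed))
    return pairs
```

Point it at one of the <year> folders and it will show which import dates have been duplicated under the second naming convention.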

Similar Messages

  • Making a backup of my Time Capsule

    Bonjour,
    Sorry if this has already been asked; I could not find what I was looking for.
    I have a Time Capsule (500 GB) which connects to a MacBook Pro, an XP desktop and a Vista laptop. I make backups of my PCs each week (ish) and store them on an external drive in a vault. (yes, I have been scorched)
In the two months since I bought it, I notice that it has started filling up alarmingly. My Mac backup directory is roughly 356 GB, whereas I know that I am only using about 75 GB on my hard drive.
    Is there a way of copying my Time Machine's backup to an external drive? On my PCs, I simply backup direct to external drive. I would love to be able to:
    1 - from my PCs, backup direct to the Time Capsule
    2 - copy this and my latest Mac backup to an external drive.
Is it possible? Why is the Mac backup so huge?
    Thanks for your time,
    Rick

    RickIsTrying wrote:
if I have 385 GB of info on my TC but only 75 on my hard drive, would it make an archive equivalent to the 75 GB?
    The amount that will be backed up will be the 385 GB.
    For the PC backup, it was not too bad; I select a destination for the backup files and I just point to the network disk, the TC.
    Will that really work? I wouldn't expect Windows to understand a disk formatted to satisfy Time Machine.
    It's annoying about the TC behaviour. I would like to backup photos and songs from the various computers in the house, but from what I gather, eventually all the space will be used up by the TM software.
    It's understandable why it works that way. It's trying to preserve as much backed-up data as it can.
    You can reserve space on the Time Capsule disk by creating a "disk image" (.dmg file) and copying it to the Time Capsule disk. Don't make it a "sparse disk image"; you need to allocate all the space you want to begin with. To use the disk image you'll have to first "mount" it. That technique won't work with a Windows computer, as it won't know anything about a Mac OS X disk image.
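The disk-image technique above might look like this in Terminal (a sketch only; the 50g size, the "Reserve" name and the Time Capsule mount point are assumptions you should adjust):

```shell
# Create a fixed-size (NOT sparse) image so the space is allocated up front.
hdiutil create -size 50g -fs HFS+ -volname Reserve ~/Reserve.dmg

# Copy it to the Time Capsule disk (assumes the TC share is already
# mounted at this hypothetical path).
cp ~/Reserve.dmg "/Volumes/Time Capsule/Reserve.dmg"

# Mount the image whenever you want to use the reserved space.
hdiutil attach "/Volumes/Time Capsule/Reserve.dmg"
```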
    Is there a way of purging older versions (there's a blast from my VMS days...)?
    That happens automatically as the Time Machine disk fills up. You could try mounting the Time Machine "sparsebundle" file, but (1) I'm not sure how easy it would be to delete anything in it, (2) it would be easy to mess up all your Time Machine backups, and (3) doing that wouldn't necessarily free up any space unless you could perform some sort of "compress" operation on the "sparsebundle" file. (I worked on VMS as recently as 2004.)

  • Samba problems between two linux computers

    I have a laptop with arch with this smb.conf
    # This is the main Samba configuration file. You should read the
    # smb.conf(5) manual page in order to understand the options listed
    # here. Samba has a huge number of configurable options (perhaps too
    # many!) most of which are not shown in this example
    # For a step to step guide on installing, configuring and using samba,
    # read the Samba-HOWTO-Collection. This may be obtained from:
    # http://www.samba.org/samba/docs/Samba-HOWTO-Collection.pdf
    # Many working examples of smb.conf files can be found in the
    # Samba-Guide which is generated daily and can be downloaded from:
    # http://www.samba.org/samba/docs/Samba-Guide.pdf
    # Any line which starts with a ; (semi-colon) or a # (hash)
    # is a comment and is ignored. In this example we will use a #
    # for commentry and a ; for parts of the config file that you
    # may wish to enable
    # NOTE: Whenever you modify this file you should run the command "testparm"
    # to check that you have not made any basic syntactic errors.
    #======================= Global Settings =====================================
    [global]
    # workgroup = NT-Domain-Name or Workgroup-Name, eg: MIDEARTH
    workgroup = WORKGROUP
    # server string is the equivalent of the NT Description field
    server string = Samba Server
    # Security mode. Defines in which mode Samba will operate. Possible
    # values are share, user, server, domain and ads. Most people will want
    # user level security. See the Samba-HOWTO-Collection for details.
    security = user
    # This option is important for security. It allows you to restrict
    # connections to machines which are on your local network. The
    # following example restricts access to two C class networks and
    # the "loopback" interface. For more examples of the syntax see
    # the smb.conf man page
    ; hosts allow = 192.168.1. 192.168.2. 127.
    # If you want to automatically load your printer list rather
    # than setting them up individually then you'll need this
    load printers = yes
    # you may wish to override the location of the printcap file
    ; printcap name = /etc/printcap
    # on SystemV system setting printcap name to lpstat should allow
    # you to automatically obtain a printer list from the SystemV spool
    # system
    ; printcap name = lpstat
    # It should not be necessary to specify the print system type unless
    # it is non-standard. Currently supported print systems include:
    # bsd, cups, sysv, plp, lprng, aix, hpux, qnx
    ; printing = cups
    # Uncomment this if you want a guest account, you must add this to /etc/passwd
    # otherwise the user "nobody" is used
    ; guest account = pcguest
    # this tells Samba to use a separate log file for each machine
    # that connects
    log file = /var/log/samba/%m.log
    # Put a capping on the size of the log files (in Kb).
    max log size = 50
    # Use password server option only with security = server
    # The argument list may include:
    # password server = My_PDC_Name [My_BDC_Name] [My_Next_BDC_Name]
    # or to auto-locate the domain controller/s
    # password server = *
    ; password server = <NT-Server-Name>
    # Use the realm option only with security = ads
    # Specifies the Active Directory realm the host is part of
    ; realm = MY_REALM
    # Backend to store user information in. New installations should
    # use either tdbsam or ldapsam. smbpasswd is available for backwards
    # compatibility. tdbsam requires no further configuration.
    ; passdb backend = tdbsam
    # Using the following line enables you to customise your configuration
    # on a per machine basis. The %m gets replaced with the netbios name
    # of the machine that is connecting.
    # Note: Consider carefully the location in the configuration file of
    # this line. The included file is read at that point.
    ; include = /usr/local/samba/lib/smb.conf.%m
    # Configure Samba to use multiple interfaces
    # If you have multiple network interfaces then you must list them
    # here. See the man page for details.
    ; interfaces = 192.168.12.2/24 192.168.13.2/24
    # Browser Control Options:
    # set local master to no if you don't want Samba to become a master
    # browser on your network. Otherwise the normal election rules apply
    ; local master = no
    # OS Level determines the precedence of this server in master browser
    # elections. The default value should be reasonable
    ; os level = 33
    # Domain Master specifies Samba to be the Domain Master Browser. This
    # allows Samba to collate browse lists between subnets. Don't use this
    # if you already have a Windows NT domain controller doing this job
    ; domain master = yes
    # Preferred Master causes Samba to force a local browser election on startup
    # and gives it a slightly higher chance of winning the election
    ; preferred master = yes
    # Enable this if you want Samba to be a domain logon server for
    # Windows95 workstations.
    ; domain logons = yes
    # if you enable domain logons then you may want a per-machine or
    # per user logon script
    # run a specific logon batch file per workstation (machine)
    ; logon script = %m.bat
    # run a specific logon batch file per username
    ; logon script = %U.bat
    # Where to store roving profiles (only for Win95 and WinNT)
    # %L substitutes for this servers netbios name, %U is username
    # You must uncomment the [Profiles] share below
    ; logon path = \\%L\Profiles\%U
    # Windows Internet Name Serving Support Section:
# WINS Support - Tells the NMBD component of Samba to enable its WINS Server
    ; wins support = yes
    # WINS Server - Tells the NMBD components of Samba to be a WINS Client
    # Note: Samba can be either a WINS Server, or a WINS Client, but NOT both
    ; wins server = w.x.y.z
    # WINS Proxy - Tells Samba to answer name resolution queries on
    # behalf of a non WINS capable client, for this to work there must be
    # at least one WINS Server on the network. The default is NO.
    ; wins proxy = yes
    # DNS Proxy - tells Samba whether or not to try to resolve NetBIOS names
    # via DNS nslookups. The default is NO.
    dns proxy = no
    # These scripts are used on a domain controller or stand-alone
    # machine to add or delete corresponding unix accounts
    ; add user script = /usr/sbin/useradd %u
    ; add group script = /usr/sbin/groupadd %g
    ; add machine script = /usr/sbin/adduser -n -g machines -c Machine -d /dev/null -s /bin/false %u
    ; delete user script = /usr/sbin/userdel %u
    ; delete user from group script = /usr/sbin/deluser %u %g
    ; delete group script = /usr/sbin/groupdel %g
    #============================ Share Definitions ==============================
    wins support = no
    [homes]
    comment = Home Directories
    browseable = yes
    writable = yes
    # Un-comment the following and create the netlogon directory for Domain Logons
    ; [netlogon]
    ; comment = Network Logon Service
    ; path = /usr/local/samba/lib/netlogon
    ; guest ok = yes
    ; writable = no
    ; share modes = no
    # Un-comment the following to provide a specific roving profile share
    # the default is to use the user's home directory
    ;[Profiles]
    ; path = /usr/local/samba/profiles
    ; browseable = no
    ; guest ok = yes
    # NOTE: If you have a BSD-style print system there is no need to
    # specifically define each individual printer
    [printers]
    comment = All Printers
    path = /var/spool/samba
    browseable = yes
    # Set public = yes to allow user 'guest account' to print
    guest ok = no
    writable = no
    printable = yes
    # This one is useful for people to share files
    ;[tmp]
    ; comment = Temporary file space
    ; path = /tmp
    ; read only = no
    ; public = yes
    # A publicly accessible directory, but read only, except for people in
    # the "staff" group
    ;[public]
    ; comment = Public Stuff
    ; path = /home/samba
    ; public = yes
    ; writable = no
    ; printable = no
    ; write list = @staff
    # Other examples.
    # A private printer, usable only by fred. Spool data will be placed in fred's
    # home directory. Note that fred must have write access to the spool directory,
    # wherever it is.
    ;[fredsprn]
    ; comment = Fred's Printer
    ; valid users = fred
    ; path = /homes/fred
    ; printer = freds_printer
    ; public = no
    ; writable = no
    ; printable = yes
    # A private directory, usable only by fred. Note that fred requires write
    # access to the directory.
    ;[fredsdir]
    ; comment = Fred's Service
    ; path = /usr/somewhere/private
    ; valid users = fred
    ; public = no
    ; writable = yes
    ; printable = no
    # a service which has a different directory for each machine that connects
    # this allows you to tailor configurations to incoming machines. You could
    # also use the %U option to tailor it by user name.
    # The %m gets replaced with the machine name that is connecting.
    ;[pchome]
    ; comment = PC Directories
    ; path = /usr/pc/%m
    ; public = no
    ; writable = yes
    # A publicly accessible directory, read/write to all users. Note that all files
    # created in the directory by users will be owned by the default user, so
    # any user with access can delete any other user's files. Obviously this
    # directory must be writable by the default user. Another user could of course
    # be specified, in which case all files would be owned by that user instead.
    ;[public]
    ; path = /usr/somewhere/else/public
    ; public = yes
    ; only guest = yes
    ; writable = yes
    ; printable = no
    # The following two entries demonstrate how to share a directory so that two
    # users can place files there that will be owned by the specific users. In this
    # setup, the directory should be writable by both users and should have the
    # sticky bit set on it to prevent abuse. Obviously this could be extended to
    # as many users as required.
    ;[myshare]
    ; comment = Mary's and Fred's stuff
    ; path = /usr/somewhere/shared
    ; valid users = mary fred
    ; public = no
    ; writable = yes
    ; printable = no
    ; create mask = 0765
    [Themes]
    path = /home/du/Themes
    available = yes
    browsable = yes
    public = yes
    writable = yes
    and another pc with ubuntu with this smb.conf
    # Sample configuration file for the Samba suite for Debian GNU/Linux.
    # This is the main Samba configuration file. You should read the
    # smb.conf(5) manual page in order to understand the options listed
    # here. Samba has a huge number of configurable options most of which
    # are not shown in this example
    # Some options that are often worth tuning have been included as
    # commented-out examples in this file.
    # - When such options are commented with ";", the proposed setting
    # differs from the default Samba behaviour
    # - When commented with "#", the proposed setting is the default
    # behaviour of Samba but the option is considered important
    # enough to be mentioned here
    # NOTE: Whenever you modify this file you should run the command
    # "testparm" to check that you have not made any basic syntactic
    # errors.
    # A well-established practice is to name the original file
    # "smb.conf.master" and create the "real" config file with
    # testparm -s smb.conf.master >smb.conf
    # This minimizes the size of the really used smb.conf file
    # which, according to the Samba Team, impacts performance
    # However, use this with caution if your smb.conf file contains nested
    # "include" statements. See Debian bug #483187 for a case
    # where using a master file is not a good idea.
    #======================= Global Settings =======================
    [global]
    ## Browsing/Identification ###
# Change this to the workgroup/NT-domain name your Samba server will be part of
    workgroup = WORKGROUP
    # server string is the equivalent of the NT Description field
    server string = %h server (Samba, Ubuntu)
    # Windows Internet Name Serving Support Section:
    # WINS Support - Tells the NMBD component of Samba to enable its WINS Server
    # wins support = no
    # WINS Server - Tells the NMBD components of Samba to be a WINS Client
    # Note: Samba can be either a WINS Server, or a WINS Client, but NOT both
    ; wins server = w.x.y.z
    # This will prevent nmbd to search for NetBIOS names through DNS.
    dns proxy = no
    # What naming service and in what order should we use to resolve host names
    # to IP addresses
    ; name resolve order = lmhosts host wins bcast
    #### Networking ####
    # The specific set of interfaces / networks to bind to
    # This can be either the interface name or an IP address/netmask;
    # interface names are normally preferred
    ; interfaces = 127.0.0.0/8 eth0
    # Only bind to the named interfaces and/or networks; you must use the
    # 'interfaces' option above to use this.
    # It is recommended that you enable this feature if your Samba machine is
    # not protected by a firewall or is a firewall itself. However, this
    # option cannot handle dynamic or non-broadcast interfaces correctly.
    ; bind interfaces only = yes
    #### Debugging/Accounting ####
    # This tells Samba to use a separate log file for each machine
    # that connects
    log file = /var/log/samba/log.%m
    # Cap the size of the individual log files (in KiB).
    max log size = 1000
    # If you want Samba to only log through syslog then set the following
    # parameter to 'yes'.
    # syslog only = no
    # We want Samba to log a minimum amount of information to syslog. Everything
    # should go to /var/log/samba/log.{smbd,nmbd} instead. If you want to log
    # through syslog you should set the following parameter to something higher.
    syslog = 0
    # Do something sensible when Samba crashes: mail the admin a backtrace
    panic action = /usr/share/samba/panic-action %d
    ####### Authentication #######
    # "security = user" is always a good idea. This will require a Unix account
    # in this server for every user accessing the server. See
    # /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/ServerType.html
    # in the samba-doc package for details.
    # security = user
    # You may wish to use password encryption. See the section on
    # 'encrypt passwords' in the smb.conf(5) manpage before enabling.
    encrypt passwords = true
    # If you are using encrypted passwords, Samba will need to know what
    # password database type you are using.
    passdb backend = tdbsam
    obey pam restrictions = yes
    # This boolean parameter controls whether Samba attempts to sync the Unix
    # password with the SMB password when the encrypted SMB password in the
    # passdb is changed.
    unix password sync = yes
    # For Unix password sync to work on a Debian GNU/Linux system, the following
    # parameters must be set (thanks to Ian Kahan <<[email protected]> for
    # sending the correct chat script for the passwd program in Debian Sarge).
    passwd program = /usr/bin/passwd %u
    passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
    # This boolean controls whether PAM will be used for password changes
    # when requested by an SMB client instead of the program listed in
    # 'passwd program'. The default is 'no'.
    pam password change = yes
    # This option controls how unsuccessful authentication attempts are mapped
    # to anonymous connections
    map to guest = bad user
    ########## Domains ###########
    # Is this machine able to authenticate users. Both PDC and BDC
    # must have this setting enabled. If you are the BDC you must
    # change the 'domain master' setting to no
    ; domain logons = yes
    # The following setting only takes effect if 'domain logons' is set
    # It specifies the location of the user's profile directory
    # from the client point of view)
    # The following required a [profiles] share to be setup on the
    # samba server (see below)
    ; logon path = \\%N\profiles\%U
    # Another common choice is storing the profile in the user's home directory
    # (this is Samba's default)
    # logon path = \\%N\%U\profile
    # The following setting only takes effect if 'domain logons' is set
    # It specifies the location of a user's home directory (from the client
    # point of view)
    ; logon drive = H:
    # logon home = \\%N\%U
    # The following setting only takes effect if 'domain logons' is set
    # It specifies the script to run during logon. The script must be stored
    # in the [netlogon] share
    # NOTE: Must be store in 'DOS' file format convention
    ; logon script = logon.cmd
    # This allows Unix users to be created on the domain controller via the SAMR
    # RPC pipe. The example command creates a user account with a disabled Unix
    # password; please adapt to your needs
    ; add user script = /usr/sbin/adduser --quiet --disabled-password --gecos "" %u
    # This allows machine accounts to be created on the domain controller via the
    # SAMR RPC pipe.
    # The following assumes a "machines" group exists on the system
    ; add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u
    # This allows Unix groups to be created on the domain controller via the SAMR
    # RPC pipe.
    ; add group script = /usr/sbin/addgroup --force-badname %g
    ########## Printing ##########
    # If you want to automatically load your printer list rather
    # than setting them up individually then you'll need this
    # load printers = yes
    # lpr(ng) printing. You may wish to override the location of the
    # printcap file
    ; printing = bsd
    ; printcap name = /etc/printcap
    # CUPS printing. See also the cupsaddsmb(8) manpage in the
    # cupsys-client package.
    ; printing = cups
    ; printcap name = cups
    ############ Misc ############
    # Using the following line enables you to customise your configuration
    # on a per machine basis. The %m gets replaced with the netbios name
    # of the machine that is connecting
    ; include = /home/samba/etc/smb.conf.%m
    # Most people will find that this option gives better performance.
    # See smb.conf(5) and /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/speed.html
    # for details
    # You may want to add the following on a Linux system:
    # SO_RCVBUF=8192 SO_SNDBUF=8192
    # socket options = TCP_NODELAY
    # The following parameter is useful only if you have the linpopup package
    # installed. The samba maintainer and the linpopup maintainer are
    # working to ease installation and configuration of linpopup and samba.
    ; message command = /bin/sh -c '/usr/bin/linpopup "%f" "%m" %s; rm %s' &
    # Domain Master specifies Samba to be the Domain Master Browser. If this
    # machine will be configured as a BDC (a secondary logon server), you
    # must set this to 'no'; otherwise, the default behavior is recommended.
    # domain master = auto
    # Some defaults for winbind (make sure you're not using the ranges
    # for something else.)
    ; idmap uid = 10000-20000
    ; idmap gid = 10000-20000
    ; template shell = /bin/bash
    # The following was the default behaviour in sarge,
    # but samba upstream reverted the default because it might induce
    # performance issues in large organizations.
    # See Debian bug #368251 for some of the consequences of *not*
    # having this setting and smb.conf(5) for details.
    ; winbind enum groups = yes
    ; winbind enum users = yes
    # Setup usershare options to enable non-root users to share folders
    # with the net usershare command.
    # Maximum number of usershare. 0 (default) means that usershare is disabled.
    ; usershare max shares = 100
    # Allow users who've been granted usershare privileges to create
    # public shares, not just authenticated ones
    usershare allow guests = yes
    #======================= Share Definitions =======================
    # Un-comment the following (and tweak the other settings below to suit)
    # to enable the default home directory shares. This will share each
# user's home directory as \\server\username
    [homes]
    comment = Home Directories
    browseable = yes
    writable = yes
    # By default, the home directories are exported read-only. Change the
    # next parameter to 'no' if you want to be able to write to them.
    read only = no
    # File creation mask is set to 0700 for security reasons. If you want to
    # create files with group=rw permissions, set next parameter to 0775.
    ; create mask = 0700
    # Directory creation mask is set to 0700 for security reasons. If you want to
    # create dirs. with group=rw permissions, set next parameter to 0775.
    ; directory mask = 0700
    # By default, \\server\username shares can be connected to by anyone
    # with access to the samba server. Un-comment the following parameter
    # to make sure that only "username" can connect to \\server\username
    # The following parameter makes sure that only "username" can connect
    # This might need tweaking when using external authentication schemes
    ; valid users = %S
    # Un-comment the following and create the netlogon directory for Domain Logons
    # (you need to configure Samba to act as a domain controller too.)
    ;[netlogon]
    ; comment = Network Logon Service
    ; path = /home/samba/netlogon
    ; guest ok = yes
    ; read only = yes
    ; share modes = no
    # Un-comment the following and create the profiles directory to store
    # users profiles (see the "logon path" option above)
    # (you need to configure Samba to act as a domain controller too.)
    # The path below should be writable by all users so that their
    # profile directory may be created the first time they log on
    ;[profiles]
    ; comment = Users profiles
    ; path = /home/samba/profiles
    ; guest ok = no
    ; browseable = yes
    ; create mask = 0600
    ; directory mask = 0700
    [printers]
    comment = All Printers
    browseable = yes
    path = /var/spool/samba
    printable = yes
    guest ok = no
    read only = yes
    create mask = 0700
    # Windows clients look for this share name as a source of downloadable
    # printer drivers
    [print$]
    comment = Printer Drivers
    path = /var/lib/samba/printers
    browseable = yes
    read only = yes
    guest ok = no
    # Uncomment to allow remote administration of Windows print drivers.
    # You may need to replace 'lpadmin' with the name of the group your
    # admin users are members of.
    # Please note that you also need to set appropriate Unix permissions
    # to the drivers directory for these users to have write rights in it
    ; write list = root, @lpadmin
    # A sample share for sharing your CD-ROM with others.
    ;[cdrom]
    ; comment = Samba server's CD-ROM
    ; read only = yes
    ; locking = no
    ; path = /cdrom
    ; guest ok = yes
    # The next two parameters show how to auto-mount a CD-ROM when the
# cdrom share is accessed. For this to work /etc/fstab must contain
    # an entry like this:
    # /dev/scd0 /cdrom iso9660 defaults,noauto,ro,user 0 0
    # The CD-ROM gets unmounted automatically after the connection to the
    # If you don't want to use auto-mounting/unmounting make sure the CD
    # is mounted on /cdrom
    ; preexec = /bin/mount /cdrom
    ; postexec = /bin/umount /cdrom
    [Downloads]
    path = /home/Duccio/Downloads
    available = yes
    browseable = yes
    guest ok = yes
    public = yes
    writable = yes
    Both have static IPs.
    There are two problems:
    1 - the PCs can't see each other in Nautilus under "Network", but...
    2 - if I type "smb://ubuntu_ip" in the Nautilus address bar on the Arch PC, I can see the shared folders; the shared folder on the Ubuntu PC (/home/Duccio/Downloads) is browseable, but I can't mount it. The message is "Unable to mount location".
    Another Ubuntu PC, which uses DHCP, is visible under the Nautilus network.
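    You can also take Nautilus out of the picture and test the share from a terminal. A sketch (replace ubuntu_ip and the share name with your own; mounting needs the cifs-utils/smbfs package installed):

```shell
# List the shares the Ubuntu box exports, as an anonymous/guest user:
smbclient -L //ubuntu_ip -N

# Try an interactive guest connection to the Downloads share:
smbclient //ubuntu_ip/Downloads -N

# Or mount it with CIFS (requires cifs-utils or smbfs):
sudo mkdir -p /mnt/downloads
sudo mount -t cifs //ubuntu_ip/Downloads /mnt/downloads -o guest
```

    If smbclient works but Nautilus does not, the problem is in the desktop layer rather than in smb.conf.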
    Last edited by jacopastorius82 (2010-11-02 21:14:51)

    On the Arch laptop I have installed something in GNOME, under System --> Administration, called "Shared Folders". Maybe this sort of software overrides the manual configuration in /etc/samba/smb.conf?
    Something like that is probably present in Ubuntu as well, I suppose.
    Last edited by jacopastorius82 (2010-11-03 22:13:53)

  • Report launch form (qms0012f)

    Dear Headstart Team,
    when using the LOV query (through Headstart Foundation Application) the user is forced to use the LOV
    and is not able to fill in the parameter field (field protected against update).
    If I use the LOV Value or Description width fields, they generate an on/off behaviour. The values you specify are not important.
    When specified, the LOV becomes screen wide, disregarding the specified widths. When cleared, the LOV returns to its
    initial behaviour.
    Are both points as intended ?
    With kind regards,
    Gerard.

    Gerard,
    "when using the LOV query (through Headstart Foundation Application) the user is forced to use the LOV
    and is not able to fill in the parameter field (field protected against update). "
    This behavior is intentional. The reason for this is that with Hsd6i, we added a display value that is different from
    the actual value of the parameters. However, in order not to break applications developed with older releases
    of Headstart, the actual value is still listed first in the LOV query. Since the first field in the LOV
    is not the field displayed on the screen, you cannot set Validate in LOV to true. So, the only option is to force
    users to use the LOV to enter data. If you do not have compatibility issues with older applications, you can
    set Validate in LOV to true and change all your LOV queries to put Description first and then value.
    "If I use the LOV, Value or Description width fields they generate an On Off behaviour. The values you
    specify are not important. When specified the LOV becomes screen wide, disregarding the specified
    widths. When cleared the LOV returns to it's initial behaviour. "
    I suspect this might be a new forms bug. I've logged a Headstart bug, just in case we can find a workaround.
    "The manual describes the use of GLOBAL variables for Default Value and Description.
    So far :
    the code for the default value will never work
    name_in(l_default_value)
    should be
    name_in(substr(l_default_value,2))
    in order to remove the colon.
    The code for the Default Description is totally absent."
    I have also logged a Headstart bug for this; we will need to investigate it. Normally the mechanism works
    correctly, so I'm not sure what is going on here.
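    For illustration, the colon issue behind that substr fix can be sketched outside Forms. In this hypothetical Python stand-in (nothing here is a real Forms API), the dict plays the role of the form's items, and `resolve_item_value` mimics `name_in()`, which expects the item name without the leading colon of a bind-variable reference:

```python
def resolve_item_value(ref, items):
    """Hypothetical stand-in for Forms' name_in(): it wants an item name
    WITHOUT the leading ':' of a bind-variable reference, so a reference
    like ':PARAMS.START_DATE' must be stripped first -- which is what
    substr(l_default_value, 2) does in the PL/SQL fix quoted above."""
    if ref.startswith(":"):
        ref = ref[1:]  # PL/SQL substr(ref, 2): drop the leading colon
    return items[ref]

items = {"PARAMS.START_DATE": "20110604"}
print(resolve_item_value(":PARAMS.START_DATE", items))  # works once stripped
```

    Passing the raw `":PARAMS.START_DATE"` string to a lookup keyed on item names fails; stripping the first character is the whole fix.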
    Regards,
    Lauri

  • iPhoto 6.0.1: lost links to pictures (thumbs fine) and forgets rotations too

    First problem is broken links.
    Within my library some photos exhibit strange behaviour; this applies to some (not all) photos within one roll.
    The thumbnails work, both in Library and in Filmstrip. However, when I double-click on an image I get a grey background containing the dashed outline of a square with an inverse '!' symbol in a grey circle.
    The JPG files are where they should be, and can be previewed and opened within Finder.
    The Permissions are fine, and set properly.
    I have tried launching iPhoto with Command+Option held down and rebuilding the iPhoto library, to no avail.
    The second problem (possibly related?) is that not only are rotations forgotten by the software, but some are out of sync -- i.e. rotated one way in the thumbnails, but rotated differently in the edit window.
    Suggestions, please, to fix these two problems with iPhoto 6.0.1.
    StooMonster

    SM:
    Welcome to the Apple Discussions. Here are several threads regarding the blanks when trying to enter the editing window.
    http://discussions.apple.com/thread.jspa?messageID=1665923#1665923
    http://discussions.apple.com/thread.jspa?threadID=313814&tstart=0
    http://homepage.mac.com/dsjosephson/iphoto6framework/FileSharing24.html
    http://discussions.apple.com/thread.jspa?threadID=316657&tstart=0
    Regarding the second problem, once the first is fixed you can try this:
    1 - make a backup copy of your iPhoto Library folder (always when trying out fixes)
    2 - move the Data folder to the desktop
    3 - launch iPhoto with the Command+Option keys depressed and follow the instructions to rebuild the library. Select the first two options regarding thumbnails.
    G4 DP-1G, 1.5G RAM, 22 Display, 200G HD, 160G HD, 250G FWHD, QT 7.0.4P   Mac OS X (10.4.5)   Canon S400, i850 & LIDE 50, Epson R200, 2G Nano

  • Import from catalog problems

    Cannot import from catalog: LR quits, and I get only about 2 or 3 photos imported.
    Adobe was no help; they had no idea why.
    I have tried "Import from Catalog" twice, moving two different catalogs into my main desktop catalog. Neither worked.
    All three catalogs (two from the laptop and my main LR catalog) work fine, and I can import photos into them without problems.
    I would like to import all the changes to photos that I made with these two catalogs on my laptop. (Both catalogs are on an external hard drive.)
    Thoughts anyone?
    I am having some issues with my video card. Screen right now is a bit pink. This would be the fourth video card I will need on this Mac Pro. Would this be the cause?
    Thanks.

    flyboys911 wrote:
    Do I need to export the laptop cats if they are already on a separate hard drive?
    It might help. Exporting a catalog sometimes cures corruption in the original catalog. Try exporting into a new catalog and then importing the new one on your desktop.
    Never heard of the last thing you mentioned. Can you describe more?
    Lightroom has a preferences file which sometimes gets corrupted and can cause all kinds of weird behaviour. You can delete (or rename) the preferences file, and LR will build a fresh one when started. If you rename it, you will still have it in case you decide you want it back.
    To rename the preferences file, rename the file
    "[username]/Library/Preferences/com.adobe.Lightroom3.plist"
    to something like
    "[username]/Library/Preferences/com.adobe.Lightroom3.plist.old"
    while LR is closed and restart LR.
    Beat

  • Bugreport: LR 2.5 fails to burn files to DVD

    Affects: LR 2.5 on Win XP SP3
    Steps to reproduce:
    1. Select a set of shots that would require 3 DVDs
    2. Export to CD/DVD as original files
    3. Insert blank DVDs when prompted
    Expected result: 3 disks with the selected files on them
    Actual result: Disk 1 and 3 contain the right files. Disk 2 is blank (nothing written, disk still untouched).
    Other observations:
    - When starting to write disk 2, LR "finishes" in a couple of seconds and asks for disk 3, just as if disk 2 had been written successfully.
    - This is not caused by faulty blank disks: LR accepts the same blank disk as #1 or #3 after "writing" to it as disk #2.
    - CD/DVD recording works fine from other applications.
    - The behaviour described above is reproducible with 100% reliability.
    Micke

    If you're dealing with things at the same granularity as an existing catalog, like moving the entire catalog and photos to a new place, then just use file COPY operations, at least if you're aware of where the catalog database file and the set of photo folders exist.
    I use Export to make a catalog that is a PORTION of an existing catalog not a copy of an entire catalog.
    I use Import to ADD photo-settings, en-masse, to an existing catalog, not to view an existing catalog in its entirety.
    I use a combination of both operations to make backups of my current work to a couple external HDs on a periodic basis.  I Export things by month to an external HD, and then import that month's catalog to the external HD's catalog so I have all the photos on that particular backup drive in a single catalog to facilitate searches.
    This lets me have my most recent photos in a single catalog on my fast internal HD, and all the photos on my external backup drive in a single catalog. And if I really wanted to, I could have all my external HD catalogs (I have several TBs of photos) imported into a monster catalog that is not connected to any photos, to do keyword searches across the entire set, and then go to the individual 1TB drive the photos exist on, based on the date.

  • iPhoto Photo Stream time zone / duplicates issue

    I've got a bunch of duplicate photos in my iPhoto library on my Macbook from both manually importing photos from iPhone when connected and from syncing via Photo Stream.
    I've since deactivated Photo Stream, but I still had duplicate photos on my computer.
    I've merged all the albums called "October Photo Stream" with the other imports into one big list sorted by date.
    The problem is, I was travelling quite a bit across different time zones, so they're not all really in order. It seems the Photo Stream copy of a photo (the one with the keyword attached) has the time and date in my "home" time zone, not in the "local" time zone where the photo was taken.
    Is this by design?
    I'd now like to remove all the duplicates and was thinking I'd just search for Photo Stream keyword, then select all those pics and delete.... the problem is I'm not sure which ones actually have duplicates as they don't all have duplicates! I really can't figure out how/why some exist as duplicates and others don't... perhaps something to do with when I was/wasn't near a wifi network (applicable while I was travelling but less so for pictures at home as my wifi is always on at home).
    This photo stream "feature" is confusing as ****, and don't even get me started on the other iCloud services like docs... could they have designed it any more counter intuitive if they tried?
    Any advice/ assistance appreciated.
    Further, if I reactivate photo stream in future, how do I stop this behaviour from happening again?

    I've noticed this same problem. I just got back from a trip overseas, and the time on all my iPhone photos imported from the Photo Stream is wrong by two hours (the time difference). It seems a bit stupid to have the photos time-stamped with your home time zone, and it means that when I imported the photos from my camera they're all out of order and I need to adjust the time on all the iPhoto photos. That's not particularly hard, it just seems unnecessary. I hope someone from Apple notices this and fixes the problem.

  • How to import multiple XML schemas in Toplink.

    Hi All,
    I am trying to import multiple schemas (defined by EPCGlobal) into TopLink. The schema imports a few other schemas, and all are available in the same location.
    For this I tried to create a project by selecting the XML radio button, and specified an XSD. The classes are generated for this file. After this, I tried to import all the other schemas using the Import Schema option. I am able to import them, but I am not able to generate classes for the schemas imported later. Can you please let me know whether this is the expected behaviour?
    Alternatively, I tried to import a schema (defined by EPCGlobal) into TopLink through a TopLink-JAXB project, but I get the following exception:
    oracle.xml.parser.schema.XSDException
    at oracle.xml.parser.schema.XSDBuilder.buildSchema(XSDBuilder.java:754)
    at oracle.xml.parser.schema.XSDBuilder.build(XSDBuilder.java:407)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at oracle.toplink.workbench.utility.ClassTools.invokeMethodWithException(ClassTools.java:572)
    at oracle.toplink.workbench.utility.ClassTools.attemptToInvokeMethodWithException(ClassTools.java:198)
    at oracle.toplink.workbench.utility.ClassTools.invokeMethodWithException(ClassTools.java:541)
    at oracle.toplink.workbench.utility.ClassTools.invokeMethodWithException(ClassTools.java:526)
    at oracle.toplink.workbench.mappingsmodel.schema.MWXmlSchema.reload(MWXmlSchema.java:481)
    at oracle.toplink.workbench.mappingsmodel.schema.MWXmlSchemaRepository.addSchema(MWXmlSchemaRepository.java:107)
    at oracle.toplink.workbench.mappingsmodel.schema.MWXmlSchemaRepository.createSchemaFromFile(MWXmlSchemaRepository.java:89)
    at oracle.toplink.ox.jaxb.compiler.tljaxb.generate(tljaxb.java:70)
    at oracle.toplink.ox.jaxb.compiler.tljaxb.main(tljaxb.java:44)
    I tried to use "tljaxb.cmd", but the same problem occurs there too. Where could the problem be? Please let me know.
    As I'm stuck with this problem, please throw some light on this.
    Thanks
    Rajasekaran

    Rajasekaran,
    I have examined your XML schemas and do not believe that the problem is due to the import. Instead it is because the XML schema being imported is itself invalid. I have emailed you the specific correction to your XML schema. It was an error related to the namespace qualification of the type in one of your element declarations.
    -Blaise

  • Hash Join Anti NA

    Another simple day at the office.....
    What was the case?
    A colleague approached me, telling me that he had two similar queries: one of them returning data, the other not.
    The "simplified" versions of the two queries looked like:
    SELECT col1
      FROM tab1
    WHERE col1 NOT IN (SELECT col1 FROM tab2);
    This query returned no data; however, he (and later on I also) was sure that there was a mismatch in the data, which should have returned rows.
    This was also proven/shown by the second query:
    SELECT col1
      FROM tab1
    WHERE NOT EXISTS
              (SELECT col1
                 FROM tab2
                WHERE tab1.col1 = tab2.col1);
    This query returned the expected difference. And this query does, in fact, the same thing as the first query!!
    Even when we hardcoded an extra WHERE clause, the result was the same. No rows for:
    SELECT *
      FROM tab1
    WHERE  tab1.col1 NOT IN (SELECT col1 FROM tab2)
           AND tab1.col1 = 'car';
    and the correct rows for:
    SELECT *
      FROM tab1
    WHERE     NOT EXISTS
                  (SELECT 1
                     FROM tab2
                    WHERE tab1.col1 = tab2.col1)
           AND tab1.col1 = 'car';
    After an hour of searching and trying to reproduce the issue, I was almost about to give up and send it to Oracle Support, qualifying it as a bug.
    However, there was one difference that I saw that could be the cause of the problem.
    Although the statements are almost the same, the execution plans showed a slight difference. The execution plan for the NOT IN query looked like:
    Plan
    SELECT STATEMENT ALL_ROWS Cost: 5 Bytes: 808 Cardinality: 2
    3 HASH JOIN ANTI NA Cost: 5 Bytes: 808 Cardinality: 2
    1 TABLE ACCESS FULL TABLE PIM_KRG.TAB1 Cost: 2 Bytes: 606 Cardinality: 3
    2 TABLE ACCESS FULL TABLE PIM_KRG.TAB2 Cost: 2 Bytes: 404 Cardinality: 2
    Whereas the execution plan of the query with the NOT EXISTS looked like:
    Plan
    SELECT STATEMENT ALL_ROWS Cost: 5 Bytes: 808 Cardinality: 2
    3 HASH JOIN ANTI Cost: 5 Bytes: 808 Cardinality: 2
    1 TABLE ACCESS FULL TABLE PIM_KRG.TAB1 Cost: 2 Bytes: 606 Cardinality: 3
    2 TABLE ACCESS FULL TABLE PIM_KRG.TAB2 Cost: 2 Bytes: 404 Cardinality: 2
    See the difference?
    Not knowing what a "HASH JOIN ANTI NA" exactly was, I entered it as a search command into the knowledge base of My Oracle Support. Besides a couple of patch-set lists, I also found Document 1082123.1, which explains all about the HASH JOIN ANTI NULL_AWARE.
    In this document the behaviour we saw is explained, with the most important remark being:
    *'If t2.n2 contains NULLs,do not return any t1 rows and terminate'*
    And then it suddenly hit me why I had been unable to reproduce the case using my own test tables.
    In our case, it meant that if tab2.col1 contained any rows with a NULL value, the join between the two tables could not be made based on a "NOT IN" clause.
    The query would terminate without giving any results!!!
    And that is exactly what we saw.
    The query with the NOT EXISTS doesn't use a NULL_AWARE ANTI JOIN and therefore does return the results.
    Also, the mentioned workaround:
    alter session set "_optimizer_null_aware_antijoin" = false;
    seems not to work. Although the execution plan changes to:
    Plan
    SELECT STATEMENT ALL_ROWS Cost: 4 Bytes: 202 Cardinality: 1
    3 FILTER
    1 TABLE ACCESS FULL TABLE PIM_KRG.TAB1 Cost: 2 Bytes: 606 Cardinality: 3
    2 TABLE ACCESS FULL TABLE PIM_KRG.TAB2 Cost: 2 Bytes: 404 Cardinality: 2
    it still returns no rows!!
    And now?
    Since there is a document explaining the behaviour, I doubt we can classify this as a bug. But in my opinion, if developers do not know about this strange behaviour, they will easily call it a bug.
    The "problem" is easily solved (or worked around) by using the NOT EXISTS solution, or by using NVL on the joined columns. However, I would expect the optimizer to sort these things out itself.
    For anyone who wants to reproduce/investigate this case, I have listed my test-code.
    The database version we used was 11.1.0.7 on Windows 2008 R2; I'm sure the OS doesn't matter here.
    -- Create two tables, make sure they allow NULL values
    CREATE TABLE tab1 (col1 VARCHAR2 (100) NULL);
    CREATE TABLE tab2 (col1 VARCHAR2 (100) NULL);
    INSERT INTO tab1
    VALUES ('bike');
    INSERT INTO tab1
    VALUES ('car');
    INSERT INTO tab1
    VALUES (NULL);
    INSERT INTO tab2
    VALUES ('bike');
    INSERT INTO tab2
    VALUES (NULL);
    COMMIT;
    -- This query returns No results
    SELECT col1
      FROM tab1
    WHERE col1 NOT IN (SELECT col1 FROM tab2);
    -- This query return results
    SELECT col1
      FROM tab1
    WHERE NOT EXISTS
              (SELECT col1
                 FROM tab2
                WHERE tab1.col1 = tab2.col1);
    I've also written a blog entry with the above text at http://managingoracle.blogspot.com
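    As a cross-check, the result difference is not Oracle-specific: the same SQL three-valued logic applies in any engine, so the test case can be rerun with Python's stdlib sqlite3 module (SQLite produces no Oracle-style plans, of course; this only demonstrates the NOT IN vs NOT EXISTS results):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tab1 (col1 TEXT);
    CREATE TABLE tab2 (col1 TEXT);
    INSERT INTO tab1 VALUES ('bike'), ('car'), (NULL);
    INSERT INTO tab2 VALUES ('bike'), (NULL);
""")

# NOT IN against a subquery containing a NULL: every comparison is
# FALSE or UNKNOWN under three-valued logic, so no rows come back.
not_in = con.execute(
    "SELECT col1 FROM tab1 WHERE col1 NOT IN (SELECT col1 FROM tab2)"
).fetchall()

# NOT EXISTS never matches the NULL row in tab2, so 'car' comes back
# (and so does tab1's own NULL row, which equals nothing either).
not_exists = con.execute(
    """SELECT col1 FROM tab1
        WHERE NOT EXISTS (SELECT 1 FROM tab2
                           WHERE tab1.col1 = tab2.col1)"""
).fetchall()

print(not_in)      # []
print(not_exists)  # [('car',), (None,)]
```

    Removing the NULL row from tab2 (or wrapping the subquery column in NVL/COALESCE) makes NOT IN return rows again, which matches the NVL workaround mentioned above.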
    If anyone has the real explanation for this behaviour, i.e. why the HASH JOIN ANTI terminates: please elaborate.
    Thanks
    Kind Regards
    FJFranken

    fjfranken wrote:
    SELECT col1
    FROM tab1
    WHERE col1 NOT IN (SELECT col1 FROM tab2);
    SELECT col1
    FROM tab1
    WHERE NOT EXISTS
    (SELECT col1
    FROM tab2
    WHERE tab1.col1 = tab2.col1);
    These two queries are NOT logically equivalent - if there are any rows in tab2 where col1 is null then the first query SHOULD return no rows.
    See http://jonathanlewis.wordpress.com/2007/02/25/not-in/ for an explanation of the difference between NOT EXISTS and NOT IN.
    > Anyone who has the real explanation for this behaviour as in why the HASH-JOIN ANTI terminates. Please elaborate
    The purpose of the null-aware anti-join is to allow a NOT IN subquery that has to deal with the null problem to run as an anti-join; historically it would HAVE to run as a FILTER SUBQUERY.
    Regards
    Jonathan Lewis

  • Bapi wrapper- MODIFY Wrapper For SyncBo

    Modify wrapper ZGET_AM_P2P for SyncBO.
    I have read the requirement:
    - replace entire item data with entries of table parameters
    Please tell me whether this code will work if I put it in the SyncBO.
    Could someone please help?
    FUNCTION ZGET_AM_P2P.
    ""Local interface:
    *"  IMPORTING
    *"     VALUE(V_ANLN) LIKE  ANLA-ANLN1 OPTIONAL
    *"     VALUE(V_STORT) LIKE  ANLZ-STORT OPTIONAL
    *"  EXPORTING
    *"     VALUE(V_MSG) TYPE  STRING
      TABLES : anla, anlz, anlh.
      DATA : v_date LIKE sy-datum,
             var1 like  BALM-MSGV1,
             var2 like  BALM-MSGV2.
      refresh : bdcdata, messtab.
      clear : bdcdata, messtab, v_msg, v_date.
      SELECT SINGLE * FROM anla WHERE anln1 = v_anln
                                  AND bukrs = '1000'.
      SELECT SINGLE * FROM anlz WHERE anln1 = v_anln
                                  AND bukrs = '1000'.
      SELECT SINGLE * FROM anlh WHERE anln1 = v_anln
                                  AND bukrs = '1000'.
      CONCATENATE anla-aktiv+6(02) anla-aktiv+4(02) anla-aktiv(04) INTO v_date.
      CALL FUNCTION 'ZBAPI_AM_P2P'
       EXPORTING
         V_ANLN          = ANLA-ANLN1
         V_STORT         = v_STORT
         V_TXT50         = ANLA-TXT50
         V_ANLHTXT       = ANLH-ANLHTXT
         V_KOSTL         = ANLZ-KOSTL
         V_WERKS         = ANLZ-WERKS
         V_DATE          = v_date
       IMPORTING
         V_MSG           = v_msg.
    ENDFUNCTION.
    FUNCTION zbapi_am_p2p.
    ""Local interface:
    *"  IMPORTING
    *"     VALUE(V_ANLN) LIKE  ANLA-ANLN1 OPTIONAL
    *"     VALUE(V_STORT) LIKE  ANLZ-STORT OPTIONAL
    *"     VALUE(V_TXT50) LIKE  ANLA-TXT50 OPTIONAL
    *"     VALUE(V_ANLHTXT) LIKE  ANLH-ANLHTXT OPTIONAL
    *"     VALUE(V_KOSTL) LIKE  ANLZ-KOSTL OPTIONAL
    *"     VALUE(V_WERKS) LIKE  ANLZ-WERKS OPTIONAL
    *"     VALUE(V_DATE) LIKE  SY-DATUM OPTIONAL
    *"  EXPORTING
    *"     VALUE(V_MSG) TYPE  STRING
      DATA :  var1 LIKE  balm-msgv1,
              var2 LIKE  balm-msgv2.
      PERFORM open_group.
      PERFORM bdc_dynpro      USING 'SAPLAIST' '0100'.
      PERFORM bdc_field       USING 'BDC_OKCODE' '=MAST'.
      PERFORM bdc_field       USING 'ANLA-ANLN1' v_anln.
      PERFORM bdc_field       USING 'ANLA-ANLN2' '0'.
      PERFORM bdc_field       USING 'ANLA-BUKRS' '1000'.
      PERFORM bdc_dynpro      USING 'SAPLAIST' '1000'.
      PERFORM bdc_field       USING 'BDC_OKCODE' '=TAB02'.
      PERFORM bdc_field       USING 'ANLA-TXT50' v_txt50.
      PERFORM bdc_field       USING 'ANLH-ANLHTXT' v_anlhtxt.
      PERFORM bdc_field       USING 'ANLA-AKTIV' v_date.
      PERFORM bdc_dynpro      USING 'SAPLAIST' '1000'.
      PERFORM bdc_field       USING 'BDC_OKCODE' '=BUCH'.
      PERFORM bdc_field       USING 'ANLZ-KOSTL' v_kostl.
      PERFORM bdc_field       USING 'ANLZ-WERKS' v_werks.
      PERFORM bdc_field       USING 'ANLZ-STORT' v_stort.
      PERFORM bdc_dynpro      USING 'SAPLAIST' '3020'.
      PERFORM bdc_field       USING 'BDC_OKCODE' '=YES'.
      CALL TRANSACTION 'AS02' USING bdcdata MODE 'N'
                                            UPDATE 'S'
                                            MESSAGES INTO messtab.
      PERFORM close_group.
      READ TABLE messtab INDEX 1.
      MOVE messtab-msgv1 TO var1.
      MOVE messtab-msgv2 TO var2.
      CLEAR v_msg.
      CALL FUNCTION 'MESSAGE_PREPARE'
        EXPORTING
          language               = 'E'
          msg_id                 = messtab-msgid
          msg_no                 = messtab-msgnr
          msg_var1               = var1
          msg_var2               = var2
          msg_var3               = ' '
          msg_var4               = ' '
        IMPORTING
          msg_text               = v_msg
        EXCEPTIONS
          function_not_completed = 1
          message_not_found      = 2
          OTHERS                 = 3.
    ENDFUNCTION.
    Message was edited by:
            yzme yzme

    Hi Saptak,
    Good to see that you had gone through earlier posts & SAP notes.
    "Record not on device" errors occur when a mobile user tries to modify a record which no longer belongs to him. Since you have an S01 SyncBO, the object he is trying to modify has now been deleted from MI. (This doesn't mean the object is deleted from the backend; it just means the GetList BAPI no longer returns this object.)
    This occurs only if the sync mode is "async". There is nothing to be worried about; this is just expected behaviour. If you feel the update is important and needs to be sent to the backend, set the parameter REPROCESS_ON_NO_DATA to 'X' and reprocess the worklist.
    When you reprocess, the GetDetail BAPI will be called and the user's changes are merged into the latest backend data. No conflict check is done, so there is a chance of backend changes being overwritten. If the particular record is only updated by the mobile user, then there is nothing to worry about.
    Let me know if you need more clarifications...
    Regards
    Ajith

  • OAUG Responds to Oracle Proposal

    An Open Letter to Mark Jarvis and Ron Wohl
    Mark, Ron
    Thank you for setting out your proposal to the OAUG on AppsNet. It was useful to see it in writing, although, as you mentioned, it is largely similar to what was proposed and presented to our membership a year ago. As we had a teleconference a few hours before publication it would have been useful to have been given some advance notice of your plan to publish.
    However as this has occurred I would like to clear up some errors and omissions in the document.
    1. You state that the OAUG would select 100% of the user papers. You omit to mention that would represent 25% of the total conference content, and that 75% would therefore not be user papers (as explained to us in the conference call at 9am PST yesterday, Wednesday 23 May).
    2. You state that no proposals were ever put forward by the OAUG board. In the 23 May conference call, just hours before your AppsNet posting, we had proposed that if you were able to support a Fall North American OAUG conference, we would be willing to collaborate with you in line with your proposals for the Spring in North America as well as in Europe and Asia Pacific.
    3. You state that you would be required to take 200 developers out of development for a week of OAUG, and before you know it we would have 400 Oracle employees engaged in each conference. Perhaps you had forgotten that a few hours earlier we had indicated that the level of support that we were requesting for the Fall conference was development staff to cover 55 sessions. Given that the next two Fall OAUG events are on the West Coast, even with travel and preparation time this represents about 120 people days, rather than the 2800 employee weeks that you refer to.
    4. Twice you state that there are going to be seven conferences in the next twelve months, and refer to four OAUG conferences. When you asked on the conference call if we would participate with you in Europe, we said yes, we would. When you asked where our venue was for Europe next year, we told you that in light of the current situation we had released it. When we discussed Asia Pacific we told you that we intended to hold our conference in Sydney, but that we did not believe that it would affect your conference in Singapore which is several months and several thousand miles away. We agreed that we would be willing to discuss the model in Asia Pacific with you, as you had indicated the possibility of an alternative style event. We were not clear how that would work. In addition, given our willingness to discuss the North American model, you were well aware that seven conferences in the next twelve months was most unlikely.
    5. You stated that for the past five years a majority of Oracle employees attending and staffing OAUG conferences were required to pay full attendance. I don't propose to count, but for the record: approximately sixty attended free, any Oracle presenters above the sixty were free, and the conference workers paid a reduced rate to cover the costs involved, such as food. Any others who decided to attend did indeed have to pay.
    It is hard for us to understand why it has been so difficult to bring this issue to closure. The OAUG Board understands that once Oracle had committed to its AppsWorld events for Spring 2001 that you were too busy to meet with us to discuss a longer term resolution, for example when we proposed a meeting with you in January. It was unfortunate that you were unable to meet the target date of meeting in March after your conference. We were pleased that we finally met on 30 April, and that we appeared to be making some progress in identifying issues and seeking to resolve them. It was disappointing that you had to postpone the agreed follow up meeting from the first week in May to 23 May, and that only then were you able to inform us that you had a deadline for just a week later.
    We had sought to avoid going into details on the discussions in public, as we felt that might jeopardise progress. I am not clear what your purpose was in doing so. We continue to be disappointed by Oracle's inability to compromise, and that after all of this time this proposal so closely resembles the original proposal made last year. I would like to make quite clear what the OAUG has offered, and the flexibility that we have shown in being willing to move from the existing, successful model to try to accommodate your valid needs to reach a wider audience.
    The OAUG has proposed that:
    - The OAUG would be willing to collaborate with Oracle on their AppsWorld events in North America, if in return Oracle would participate in the OAUG's independent Fall North American conference to the extent of approximately sixty development staff.
    - The OAUG, in line with the majority of feedback received from European users, would be willing to collaborate with Oracle on their AppsWorld event in Europe.
    - The OAUG, after the Sydney Asia Pacific event which we have committed to in 2001, would be willing to collaborate with Oracle in their AppsWorld event in Asia Pacific, subject to clarification as to the form of that event.
    I believe that is a more than reasonable proposal.
    Given that Oracle decided to move these discussions into the public arena, that they did so in such an extensive way just a few hours after a discussion with the OAUG, and without giving any advance warning, I am forced to wonder whether Oracle is really negotiating in good faith in this matter.
    The OAUG has always stood for the independent voice of the user community. We have received consistent and clear feedback from our members that they wish to have a forum for education and feedback which is, and can clearly be seen to be, independent from the Oracle message. We recognise Oracle's marketing needs, and believe that our proposed compromise meets those needs and the stated needs of existing users. If Oracle really had the interests of the users at heart it would not behave in this way. Such behaviour by Oracle only goes to reinforce the importance and value of a truly independent user group.
    Jeremy Young
    OAUG President

    Jeremy Young
    OAUG President
    null

  • OAUG Responds to Oracle's Proposal

    An Open Letter to Mark Jarvis and Ron Wohl
    Mark, Ron
    Thank you for setting out your proposal to the OAUG on AppsNet. It was useful to see it in writing, although, as you mentioned, it is largely similar to what was proposed and presented to our membership a year ago. As we had a teleconference a few hours before publication it would have been useful to have been given some advance notice of your plan to publish.
    However, as this has occurred, I would like to clear up some errors and omissions in the document.
    1. You state that the OAUG would select 100% of the user papers. You omit to mention that would represent 25% of the total conference content, and that 75% would therefore not be user papers (as explained to us in the conference call at 9am PST yesterday, Wednesday 23 May).
    2. You state that no proposals were ever put forward by the OAUG board. In the 23 May conference call, just hours before your AppsNet posting, we had proposed that if you were able to support a Fall North American OAUG conference, we would be willing to collaborate with you in line with your proposals for the Spring in North America as well as in Europe and Asia Pacific.
    3. You state that you would be required to take 200 developers out of development for a week of OAUG, and before you know it we would have 400 Oracle employees engaged in each conference. Perhaps you had forgotten that a few hours earlier we had indicated that the level of support that we were requesting for the Fall conference was development staff to cover 55 sessions. Given that the next two Fall OAUG events are on the West Coast, even with travel and preparation time this represents about 120 people days, rather than the 2800 employee weeks that you refer to.
    4. Twice you state that there are going to be seven conferences in the next twelve months, and refer to four OAUG conferences. When you asked on the conference call if we would participate with you in Europe, we said yes, we would. When you asked where our venue was for Europe next year, we told you that in light of the current situation we had released it. When we discussed Asia Pacific we told you that we intended to hold our conference in Sydney, but that we did not believe that it would affect your conference in Singapore which is several months and several thousand miles away. We agreed that we would be willing to discuss the model in Asia Pacific with you, as you had indicated the possibility of an alternative style event. We were not clear how that would work. In addition, given our willingness to discuss the North American model, you were well aware that seven conferences in the next twelve months was most unlikely.
    5. You stated that for the past five years a majority of Oracle employees attending and staffing OAUG conferences were required to pay full attendance. I don't propose to count, but for the record: approximately sixty were free, any Oracle presenters above the sixty were free, and the conference workers paid a reduced rate to cover the costs involved, such as food. Any others who decided to attend did indeed have to pay.
    It is hard for us to understand why it has been so difficult to bring this issue to closure. The OAUG Board understands that once Oracle had committed to its AppsWorld events for Spring 2001 that you were too busy to meet with us to discuss a longer term resolution, for example when we proposed a meeting with you in January. It was unfortunate that you were unable to meet the target date of meeting in March after your conference. We were pleased that we finally met on 30 April, and that we appeared to be making some progress in identifying issues and seeking to resolve them. It was disappointing that you had to postpone the agreed follow up meeting from the first week in May to 23 May, and that only then were you able to inform us that you had a deadline for just a week later.
    We had sought to avoid going into details on the discussions in public, as we felt that might jeopardise progress. I am not clear what your purpose was in doing so. We continue to be disappointed by Oracle's inability to compromise, and that after all of this time, this proposal so closely resembles the original proposal made last year. I would like to make quite clear what the OAUG has offered, and the flexibility that we have shown in being willing to move from the existing, successful model, to try to accommodate your valid needs to reach a wider audience.
    The OAUG has proposed that:
    • The OAUG would be willing to collaborate with Oracle on their AppsWorld events in North America, if in return Oracle would participate in the OAUG's independent Fall North American conference to the extent of approximately sixty development staff.
    • The OAUG, in line with the majority of feedback received from European users, would be willing to collaborate with Oracle on their AppsWorld event in Europe.
    • The OAUG, after the Sydney Asia Pacific event which we have committed to in 2001, would be willing to collaborate with Oracle on their AppsWorld event in Asia Pacific, subject to clarification as to the form of that event.
    I believe that is a more than reasonable proposal.
    Given that Oracle decided to move these discussions into the public arena, that they did so in such an extensive way just a few hours after a discussion with the OAUG, and without giving any advance warning, I am forced to wonder whether Oracle are really negotiating in good faith in this matter.
    The OAUG has always stood for the independent voice of the user community. We have received consistent and clear feedback from our members that they wish to have a forum for education and feedback which is, and can clearly be seen to be, independent from the Oracle message. We recognise Oracle's marketing needs, and believe that our proposed compromise meets those needs, and the stated needs of existing users. If Oracle really had the interests of the users at heart it would not behave in this way. However, such behaviour by Oracle only goes to reinforce the importance and value of a truly independent users group.
    Jeremy Young
    OAUG President

    Have you tried putting something like Apache in front of ORDS, with /ords set up there to require HTTP Basic authentication?
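    As a rough sketch of that idea, the fragment below assumes ORDS is listening on localhost:8080, that mod_proxy and mod_auth_basic are enabled, and that an htpasswd file already exists; the paths, port, and realm name are all illustrative, not taken from any particular install:

    ```apache
    # Hypothetical vhost fragment: proxy /ords to a local ORDS instance
    # and require HTTP Basic auth before anything reaches it.
    <Location "/ords">
        ProxyPass "http://localhost:8080/ords"
        ProxyPassReverse "http://localhost:8080/ords"
        AuthType Basic
        AuthName "ORDS"
        AuthBasicProvider file
        AuthUserFile "/etc/apache2/.htpasswd"
        Require valid-user
    </Location>
    ```

    One caveat with this approach: Apache only protects traffic that actually goes through it, so you would also want ORDS's own port firewalled off from direct access.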

  • Need some modelling help...

    Hello,
    I have a model as shown below:
    SF3 joins M:1 to D1; SF1 and SF2 join 1:M to D1, which in turn joins 1:M to F1 and F2.
    I have put aggregations on Measure1 and Measure2 for F1 and F2, so that when the user pulls columns from all the dimensions they work together.
    My problems are:
    1. When the user pulls columns from SF1, SF2 and SF3 along with F1 and F2, the SUM on the measures multiplies the results by the 1:M factor from D1 to SF3.
    2. If I use level-based measures, the generated queries pick up the first row by partitioning over the selected level and then join the results from the two facts. The first row shows correct values, but the next M-1 rows show '0' or null against the additional rows coming from the second fact.
    3. If I don't set the aggregation rules (and use no level-based measure), I get an error in the UI whenever columns from any of the dimensions are pulled along with measures from both facts, which I believe is standard OBIEE behaviour.
    4. I can't combine the tables into one LTS as some sites have suggested, because the data volume is rather big and every query would then join all the tables, which is not good.
    In short, I want to pull columns from any of the tables mentioned above without getting an error in the UI, and without the results multiplying. Any suggestions, please?
    Thanks
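    Problem 1 above is the classic fan-trap: joining two facts through a shared dimension repeats each row of one fact once per matching row of the other, so SUMs inflate. The sketch below is not OBIEE itself, just a minimal SQLite illustration of the effect with a made-up two-fact schema (d1, f1, f2 and their measures are invented names); the fix shown, aggregating each fact to the dimension grain before joining, is what OBIEE does for you when each fact sits in its own logical table with correct content levels:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE d1 (dkey INTEGER);
    CREATE TABLE f1 (dkey INTEGER, measure1 REAL);
    CREATE TABLE f2 (dkey INTEGER, measure2 REAL);
    INSERT INTO d1 VALUES (1);
    INSERT INTO f1 VALUES (1, 10), (1, 20);        -- true SUM(measure1) = 30
    INSERT INTO f2 VALUES (1, 5), (1, 5), (1, 5);  -- true SUM(measure2) = 15
    """)

    # Naive join: the 2 f1 rows x 3 f2 rows fan out to 6 rows,
    # so each measure is counted once per row of the *other* fact.
    bad = con.execute("""
    SELECT SUM(f1.measure1), SUM(f2.measure2)
    FROM d1 JOIN f1 ON f1.dkey = d1.dkey
            JOIN f2 ON f2.dkey = d1.dkey
    """).fetchone()
    print(bad)   # inflated: (90.0, 30.0) instead of (30, 15)

    # Aggregate each fact to the dimension grain first, then join.
    good = con.execute("""
    SELECT a.m1, b.m2
    FROM (SELECT dkey, SUM(measure1) AS m1 FROM f1 GROUP BY dkey) a
    JOIN (SELECT dkey, SUM(measure2) AS m2 FROM f2 GROUP BY dkey) b
      ON a.dkey = b.dkey
    """).fetchone()
    print(good)  # correct: (30.0, 15.0)
    ```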

    Not sure exactly; I used this post to help me connect to the GoDaddy server:
    http://www.adobe.com/cfusion/webforums/forum/messageview.cfm?forumid=12&catid=189&threadid=890853&highlight_key=y&keyword1=access%20godaddy#3186299

  • iCal won't stay on the second monitor

    Hi all,
    right now I'm working with a two-monitor setup: my MBP and an external display. I would like to have iCal on the external display, but every time I start iCal it opens on the MBP's screen and I have to drag it over manually; it can't remember its previous location the way other applications such as Safari can.
    Any ideas how to fix this? I tried deleting the ical.plist file, but there was no change in behaviour.
    Best regards - Christoph

    import javax.swing.*;
    import java.awt.*;

    class frmFullScreen {
         JFrame frame;

         public static void main(String[] args) {
              frmFullScreen f = new frmFullScreen();
         }

         frmFullScreen() {
              frame = new JFrame("TEST");
              frame.setUndecorated(true); // no title bar or window borders
              frame.setSize(Toolkit.getDefaultToolkit().getScreenSize());
              frame.setVisible(true);
         }
    }
