Confusion about message filters and content filters

I have a message filter that performs a quarantine action:
badbody: if body-dictionary-match("badbody", 1) {
    quarantine("Policy");
    deliver();
}
I also wrote a content filter 'good' to see which spam messages are missed by IronPort Anti-Spam:
Conditions (only if all conditions match):
header("X-IronPort-Quarantine") != "^Policy$"
header("X-Spam-flag") != "^(?i)YES$"
Action:
duplicate-quarantine ("good")
deliver()
I thought these two rules could never both fire: once the badbody filter had sent the spam to the 'Policy' quarantine, it should not be possible to duplicate it into the 'good' quarantine.
But it happened:
Tue Jun 17 18:52:55 2008 Info: New SMTP ICID 26146919 interface InNet (10.68.2.161) address 61.135.132.136 reverse dns host websmtp.sohu.com verified no
Tue Jun 17 18:52:55 2008 Info: ICID 26146919 ACCEPT SG ICP match .sohu.com SBRS 5.5
Tue Jun 17 18:52:55 2008 Info: Start MID 10698519 ICID 26146919
Tue Jun 17 18:52:55 2008 Info: MID 10698519 ICID 26146919 From: <mia_kma3998>
Tue Jun 17 18:52:55 2008 Info: MID 10698519 ICID 26146919 RID 0 To: <swordhuihui>
Tue Jun 17 18:52:55 2008 Info: MID 10698519 Message-ID '<10849536>'
Tue Jun 17 18:52:55 2008 Info: MID 10698519 Subject '=?GB2312?B?1Pa807z7zsU=?='
Tue Jun 17 18:52:55 2008 Info: MID 10698519 ready 1452582 bytes from <mia_kma3998>
Tue Jun 17 18:52:56 2008 Info: MID 10698519 matched all recipients for per-recipient policy DEFAULT in the inbound table
Tue Jun 17 18:52:56 2008 Info: MID 10698519 was too big (1452582/102400) for scanning by CASE
Tue Jun 17 18:52:56 2008 Info: Start MID 10698528 ICID 0
Tue Jun 17 18:52:56 2008 Info: MID 10698528 was generated based on MID 10698519 by duplicate-quarantine filter 'good'
Tue Jun 17 18:52:56 2008 Info: MID 10698528 ICID 0 From: <mia_kma3998>
Tue Jun 17 18:52:56 2008 Info: MID 10698528 ICID 0 RID 0 To: <swordhuihui>
Tue Jun 17 18:52:56 2008 Info: MID 10698528 ready 1452584 bytes from <mia_kma3998>
Tue Jun 17 18:52:56 2008 Info: MID 10698528 quarantined to "good" (duplicated by content filter:good)
Tue Jun 17 18:52:56 2008 Info: MID 10698519 quarantined to "Policy" (message filter:flg1)
Tue Jun 17 18:52:59 2008 Info: ICID 26146919 close
The log shows the message filter's quarantine action taking effect after the content filter's action. I'm quite confused.
Any suggestion?

The original message was marked to go to the "Policy" system quarantine via the message filter. However, that message continues through the email pipeline. If no other action affects the message (e.g., being dropped by Sophos Anti-Virus), the system will move it to the "Policy" quarantine as originally marked.
However, in your case, the message was marked to be sent to the "Policy" system quarantine, and then it matched your content filter and did two things:
1. spawned a copy of the original message and sent this new one to the "good" system quarantine. (see MID 10698528)
2. the original copy was left alone, and it was sent to the "Policy" quarantine. If you had a drop() action, it would have been dropped and you would have been left with the single copy from #1. (see MID 10698519)
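If your intent was to end up with only one quarantined copy per spam message, one possible approach (a sketch only, reusing your dictionary and quarantine names; verify the action syntax against your AsyncOS documentation) is to duplicate and then drop in a single message filter:

```
keep_one_copy: if body-dictionary-match("badbody", 1) {
    duplicate-quarantine("good");
    drop();
}
```

With drop() in place the original never continues down the pipeline, so only the duplicate in the "good" quarantine remains.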
What was the intended behavior you were trying to achieve?
Here are some references that may help:
1. Where can I see a diagram of the IronPort email pipeline?
You can find a diagram of the queue sequence by clicking the Help link in the top right of the web interface (it takes a while to load). Find the section "Understanding the Email Pipeline" and, under that, "Overview: Email Pipeline".

Similar Messages

  • Confused about message downloading and synchronization

    I'm new to TB; in the past I have always used a simple web interface. My email is on an IMAP server (Dreamhost).
    I've read the "Configuration Options for Downloading Messages" page, but either don't understand it or it doesn't give enough details.
    I want to download all my messages stored on my server to my laptop, not just the headers. I'd like to preserve the existing folder configuration that I have on my server. I don't want TB to ever delete any message unless I specifically tell it to do so, preferably with confirmation.
    My goal is to download all my currently stored messages and folders; then, once I know they are safe on my hard drive, I will selectively delete many of them from the server.
    I'm not understanding how to make TB do that. It seems it is only downloading headers, and I can't see how to make it download the entire message. Also, I don't quite want what I think it considers synchronization: if I delete a message on the server I don't necessarily want TB to delete that message locally. How else could I archive my messages locally?
    in other words, I want all my messages to be stored locally, and only some of them left on the server.
    For instance, I place three emails into a folder on my server. I quit that
    connection, then invoke Icedove on my laptop. If I click on that folder it
    then retrieves and displays those three emails (or just the headers?).
    If I then quit Icedove and return to my web based email and remove those
    three emails from the server, I expect those three emails to stay on my
    local machine. Yet when I invoke Icedove again, it sees that the three emails are
    gone from the server and subsequently removes them from display....
    So, does this mean that the messages were never actually retrieved
    locally? Or is Icedove removing the local copies in order to synchronize
    with the server?
    Here are the settings I am using:
    under "Copies & Folders" -> keep message archives in: "Archives folder
    on": Local Folders"
    under "Synchronization & Storage" ->"Message Synchronizing" -> keep
    messages for this account on this machine. I have all the folders checked
    under "Advanced"
    under "Disk Space" -> Synchronize all messages regardless of age. And
    "Don't delete any messages". It doesn't give me an option to not use
    synchronization.
    Thanks if you can help,
    Keith

    Keith, checking that your mail is available for you to read while offline is the easiest way to tell if you have a full offline synchronization of your mail. That is the extent of it in this context.
    The 1.2M inbox file will be an MBOX file.
    In Thunderbird, use the View menu (Alt+V) to ensure that Folders is set to All.
    In the list to the left you will now (if not before) see "Local Folders" with its Inbox, Sent, etc. This is not part of your IMAP account, and mail located there is not affected by IMAP synchronization. As a matter of fact, moving mail there deletes it from the IMAP server; copying does not.
    Having seen the size of your mail (I assume 1.2M is indicative), there is no need for the Import Export Tools (sorry): just select all the contents of a folder (Ctrl+A), right-click, select Copy To, and choose a location under Local Folders (you can create your own folders there beforehand).
    Now my change of heart re Import/Export: copying and moving masses of mail in an IMAP situation can cause the synchronization to basically lose the plot, but a couple of thousand mails should not be enough to cause an issue. The Import Export Tools add-on is a workaround where the mail leaves Thunderbird and returns to the new location without synchronization.

  • HT3529 I am confused about messaging. Shouldn't I be able to send and receive messages with cellular data off?  I can usually send the message, but often do not receive messages until I either have wifi or turn cellular on.

    I am confused about messaging. Shouldn't I be able to send and receive messages with cellular data off and without wifi? I can send the message but often do not receive messages from others, iPhone or other, until  I turn on cellular or wifi.

    It depends on the type of message.
    SMS messages will send and receive with data off. And while you can guarantee you send using SMS, you cannot guarantee that whoever replies to you does also; they may be replying through iMessage if they are using iPhones.
    However, Android phones should be sending through SMS.
    You can turn off iMessage if you want to, though people with limited SMS text messaging in their plans may not appreciate it, and may stop messaging you.

  • Just bought a Nikon d750 and confused about adobe LR4 and PS6 support for the RAW files. I have DNG 8.7 but wondering if LR and PS will import direct soon Thanks for any advice

    Just bought a Nikon d750 and confused about adobe LR4 and PS6 support for the RAW files. I have DNG 8.7 but wondering if LR and PS will import direct soon Thanks for any advice

    Support for the Nikon D750 was introduced in the latest versions, LR 5.7 and ACR 8.7, on November 18th, 2014.
    Further updates to LR 4 were stopped when LR 5 was released on June 9th 2013. No further updates for bug fixes and new camera support.
    Nada, LR 4 will never support Nikon D750. The Nikon D750 was introduced into the market in September 2014 some 15 months after further development of LR 4 was discontinued.
    You can use the free Adobe DNG Converter to convert the NEF (raw) files from your Nikon D750 to the Adobe DNG format, which will permit you to import them into LR 4. This is the crutch provided by Adobe to allow for the processing of raw files with outdated versions of LR and ACR.
    You can also update the ACR plugin for PS CS6 to version 8.7 which can also work with the raw files from the D750. For direct support in Lightroom you will need to upgrade (paid) to version 5.7.

  • I'm horribly confused about student licensing and commercial use

    As the title says I'm horribly confused about student licensing and using it for commercial use.
    I currently have a Student Licensing version of Adobe Creative Suite 4 that I purchased through my school's journeyEd portal.
    Seeing how CS5 is now out, I was browsing prices (why not upgrade while I'm still a student, right?), and while browsing I bumped into one source that says student licensing cannot be used for commercial purposes, and this is when the confusion started. I remembered reading before that we are able to use student licensing for commercial purposes, so it was time for a Google search. I found one Adobe FAQ that says I can:
    http://www.adobe.com/education/students/studentteacheredition/faq.html
    " Can I use my Adobe Student and Teacher Edition software for commercial use?
    Yes. You may purchase a Student and Teacher Edition for personal as well as commercial use. "
    and I found this old thread;
    http://forums.adobe.com/thread/314304
    Where a poster listed as an employee of Adobe states:
    "There is no upgrade from the CS3 Educational Edition to the comparable CS3 editions sold in non-academic environments. If you have an educational version of for CS3 obtained legitimately (i.e., you qualified for the educational version when you obtained it), you may continue to use that software for the indefinite future, even for commercial use! You cannot sell or otherwise transfer that license, though! When the next version of the Creative Suite is released, you will have two choices: (1) If you still qualify for the educational version, you can buy a copy of that next version (there is no special upgrade pricing from one educational version to another; the price is already very low) or (2) you can upgrade from the educational version of CS3 to the full version of the next version of the Creative Suite as an upgrade from CS3 at the prices published at that time. "
    Okay, cool. Hmm, what's this? Adobe is asking me if I want to IM with a live customer service agent; sure, why not? The conversation started and I asked her my question about using my CS4 license for commercial use. She asked for my product code and email to verify my product, then informed me I could purchase the upgrade version of CS5 and use that commercially. Okay, great, but that didn't really answer my question. I reworded it and gave her a link to that FAQ page; it went like this:
    "[CS Rep] : [My name], I would like to inform you that Adobe Student and Teacher Editions are not allowed for
    commercial use.
    [CS Rep] : However, you can upgrade your current software to a normal upgrade version, and you can continue
    using it for commercial purpose.
    [Me] : Then is the FAQ page mistaken? Because it is very misleading if it is. But thank you for the information.
    [CS Rep] : You are welcome.
    [CS Rep] : I apologize for the misleading information in the FAQ."
    And after that, I went back to being confused.
    So my questions are: Can I or can't I use my Adobe Creative Suite 4 student licensing for commercial purposes? And if I purchase a student-licensed CS5, can I use that for commercial purposes as well?
    Sorry for the long post, I just want to be perfectly clear on what I can and can not do with my purchase.

    The rules differ in various parts of the world. In North America you can use it for commercial work.
    There are no student/academic upgrades. The pricing is so low that in many cases you're better off buying another full student license but you are eligible for upgrade pricing for commercial versions once you're out of school.
    You may not transfer the student license in any way.
    Bob

  • Question about message filter

    I want to archive all email by using message filter.
    The filter is the following:
    archiveallmail:
    if (rcpt-to == "@mydomain\\.com") {
        archive("backup");
    }
    So all mail will be archived to the directory "backup". I would like to separate different users, for example:
    if (rcpt-to == "@mydomain\\.com") {
        archive(rcpt-to);
    }
    but this causes an error because rcpt-to is not a string. How can I archive email to separate directories according to the recipient address?

    This cannot be done, to my knowledge. You would need to use a $ action variable, but the filter only accepts a string.
    The $ variable will not work either, since the directory is named after the literal string.
    I wouldn't archive all messages on the IronPort itself. You can use the bcc option to send a copy to a different mail host.
    Regards,
    Mark
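    To make the bcc workaround concrete, here is a minimal sketch of such a message filter (the archive host address is purely hypothetical; check the bcc() action syntax against your AsyncOS version):

```
archivebybcc:
if (rcpt-to == "@mydomain\\.com") {
    bcc("[email protected]");
}
```

    The receiving mail host can then file each copy into per-recipient mailboxes or directories, which the on-box archive() action cannot do.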

  • Confusion about UEFI/BIOS and GPT/MBR compatibility issues

    So a user said in another post that they were NOT able to boot in UEFI mode and install Fedora even though it is UEFI compatible. But this person was able to use Legacy mode and install Fedora and, furthermore, was able to "keep the Windows partition." I'm guessing that means the Win 8 that came with it, which would be installed in UEFI/GPT mode, correct? I'm specifically referring to the Y510P, but from what I understand *every* laptop that comes preinstalled with Win 8 must be UEFI/GPT.
    So the way I understand it is that the installed Fedora is in "BIOS"/GPT mode rather than "BIOS"/MBR mode because you can't have both GPT and MBR on the same disc.
    I have just started learning about this UEFI/BIOS and MBR/GPT nonsense, and it's going to drive me crazy until I finally understand it. So I guess what I'm asking is:
    1) When I get my y510p I assume it will be in UEFI/GPT mode. Can I install non-UEFI distros alongside it as I take it was done with Fedora?
    2) If I install a linux distro alongside Win 8, do I have to worry about compatibility issues with a drive that is in GPT format? Or does the MBR/GPT issue have nothing to do with it, so I don't have to worry about ever changing my drive to MBR?  
    For example, I read that Win 7 must be installed either as BIOS/MBR or UEFI/GPT.  This can not be mixed and matched.  This means that if I could not get the Windows 7 installer to boot in UEFI I would have to install as MBR.  This also means I would have to format the drive and reinstall Win 8 on the MBR.  
    So my question is: do other OSes like Linux have these restrictions? (For example, if a particular distro will not boot in UEFI and therefore MUST install on MBR.)
    3) I have a pendrive with YUMI installed with a ton of distros/tools/Win installs/etc. (It is a USB boot tool, like UNetbootin, that allows you to add multiple bootable images.) I have used it many, many times with my older computers, none of which were UEFI, and it works great. But after a recent encounter with my dad's ASUS Windows 8 computer (not with the Y510P yet), I found out that UEFI seems to be complicating the crap out of things (for me, at least).
    So when I used this computer, I noticed that when I boot (with legacy mode enabled) and enter the "boot selection screen" in order to boot with USB, I have two options a) UEFI:"name of usb" and b) "name of USB". The UEFI option would NOT boot, but it would boot without the UEFI: option.
    So does this mean that I am booting in non-UEFI mode and once I have booted this way and choose a distro to install that it CANNOT install in UEFI mode?  I recently saw a tool called Rufus that I have yet to try that has an option to set the bootable USB to UEFI, so that would possibly work if I wanted to install a UEFI compatible distro (Arch linux is what I'm wanting.)
    4)  If installing a UEFI compatible distro (such as Arch) requires that the USB device be able to boot in UEFI mode has anyone been able to do this?  Has anyone even been able to boot a device in UEFI mode to do *anything* such as run a live linux?  
    I'm 99% sure I would be able to boot in legacy mode and run a live linux (because I did so on my dad's computer) but the problems arise when I consider how to INSTALL.
    I would really like to know the answers to these questions (as scattered as they are.) Any help would be appreciated!
    Unnecessary info:
    (I started learning about BIOS/UEFI and MBR/GPT the hard way a few days ago by trying for hours to install Windows 7 on my dad's Windows 8 laptop because I could NOT get Win 7 installation to work...it kept asking for drivers before I could install until I finally used the Windows USB install tool, put the stick in a different USB, AND formatted the drive as MBR because Windows 7 would NOT install on the existing GPT drive until I used diskpart.exe -clean. And I have read that Win 7 64 bit will work fine on a UEFI/GPT setup. I used the Windows 7 USB boot tool which did NOT give me a UEFI: and regular option. It showed up simply as "name of usb" without a UEFI in front. Since I read that Windows 7 must either be in BIOS/MBR mode or UEFI/GPT mode that this drive would not boot in UEFI mode, and I don't know why...Although I believe I read that Win 7 cannot be installed from a USB in UEFI/GPT mode, only BIOS/MBR.  UEFI/GPT mode requires a DVD install but I did not have a drive to test this.)

    I have a Y510p which is running dual boot Windows 8.1 and Arch Linux. I think it is strongly advised to do plenty of reading ahead of any install if you will be using UEFI and Linux, so that you understand all the issues before making critical changes to the existing system.
    Yes, if the machine comes with Windows 8 (as mine did) then the disk will be formatted with a GPT partition table (instead of the old MBR partitioning scheme), and will boot using UEFI. If you are going to try to keep the existing Windows 8 system and add Linux then you will need to keep the disk with its GPT partition table and partition structure, but you can shrink the Windows 8 C: drive to make space for the Linux partitions that are needed ( a root partition and at least a /home and/or /opt partition and possibly a linux swap partition also ).  If you want to boot the Linux install via UEFI then you can simply add the required boot directory to the EFI System Partition (ESP).
    However, it is very important that before trying to do any Linux install you switch off Fastboot from within Windows 8 (or 8.1). Also, most Linux distributions have some difficulty booting with Secure Boot, though a few, such as Ubuntu and Fedora, are supposed to be able to do so. Hence it is much easier to work with Linux if Secure Boot is first switched off from the BIOS settings menu.
    The order of operations that I used was;
    1) Switch off Secure Boot from the BIOS - and boot back into the Windows 8 system to check that it boots OK.
    2) With Windows 8 running go into the settings and switch off Fastboot (which does a hybrid suspend when it shuts down instead of a full normal shutdown - if you don't do this then the memory gets overwritten when booting Linux in the future which means booting back into Windows will fail). 
    3) Reboot back into Windows and check all is well; if so, use the disk management facility within Windows 8 to shrink the C: drive to make room for the Linux partitions.
    4) Reboot to check Windows 8 still boots OK.  
    5) If you are going to update to Windows 8.1 then do so, and then update everything once it is booted (it is a huge update and takes ages!). Once done then you will likely have to update drivers for the graphics cards, the clickpad and possibly the wireless chip and ethernet chip. I found that I needed to get drivers that were newer than were available on the Lenovo website, by going to the relevant hardware manufacturer website (eg for synaptics for the clickpad). Then spent a week or so in the evenings getting Windows 8.1 configured the way I like it.
    6) Then I did a lot of reading about the various options for the boot manager that would suit a UEFI boot for a dual boot system for Windows 8.1 and Arch Linux and there was a choice of Grub, Gummiboot, rEFInd, and others - and after reading the details I decided on rEFInd as my boot manager which can boot not only any new Arch Linux install but automatically finds the Windows UEFI boot files and presents the options in a nice graphical window once the system gets past POST at bootup.
    7) It was important to check which partition was the ESP and to know what partitions I needed to create for the Arch Linux system.  Then I went ahead and booted from a usbkey to a uefi install system, and very carefully proceeded with a standard Arch Linux install, being particularly careful to know where to put the rEFInd boot manager files and the kernel and initrd files. Also I used efibootmgr to write the appropriate NVRAM boot entry in the motherboard memory so that the uefi boot system knows where to find the rEFInd uefi boot files in the ESP.
    8) Once complete the system boots to Arch Linux as the default, with a nice Windows icon which you can select with the arrow keys within the boot timeout period (default 20 seconds).
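    As an aside, the NVRAM boot entry from step 7) can be written with efibootmgr roughly like this (a sketch only: the disk, partition number, and loader path here are assumptions that must match your actual ESP layout):

```
# Register rEFInd's EFI binary as a UEFI boot entry (run as root).
# /dev/sda and partition 2 stand in for your disk and its ESP.
efibootmgr --create --disk /dev/sda --part 2 \
    --label "rEFInd" --loader '\EFI\refind\refind_x64.efi'
```

    Running efibootmgr with no arguments afterwards lists the current boot entries so you can confirm the new one was recorded.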
    I noted also that it is possible to create boot stanzas in the rEFInd boot manager config files which allow rEFInd to chain-load other Linux systems or even other bootloaders if you wish, so it is very flexible. So, if you want to, you could install a standalone set of GRUB directories/files so that if the normal Linux boot fails you can select the GRUB icon from rEFInd and chainload GRUB to boot either the same Arch Linux install, or point to a third Linux distribution if you have more partitions containing that third install, which might be Ubuntu or Mint or ....
    Either way, although getting to understand how UEFI boot works is a learning curve, it is actually generally simpler than the old legacy BIOS boot. With UEFI you no longer need an MBR on the drive, only a suitable EFI System Partition, which has to be VFAT formatted. However, if you want to have one of the Linux distributions booting from a legacy MBR, then you need to create an MBR at the start of the drive, so you would need to move the start of the first partition and create a suitably sized Master Boot Record; otherwise MBR boot can't work. If you do that, and the Windows partition is the one being resized, you have to be careful that it doesn't mess up the Windows boot! However, since using UEFI to boot rEFInd allows a chainload to GRUB/Gummiboot or other bootloaders, there should be no need to mess with MBR booting if you go down that route.
    If you are interested in rEFInd then the author Rod Smith has a good set of documentation that describe the details at http://www.rodsbooks.com/refind/
    He is also the author of a really excellent disk partitioner for GPT disks - http://www.rodsbooks.com/gdisk/
    Clearly it is necessary to read up on the boot facilities available for any linux distribution that you plan to put on the system.
    One nice thing is that UEFI boot with an EFISTUB-supported kernel build is really fast on the Y510p. My system boots Arch Linux in about 7 seconds to the KDE login prompt once the POST is complete, and that only takes a couple of seconds. Of course, Windows is much slower once it is selected at the rEFInd screen and takes somewhere around 40 seconds or so to boot, but at least Linux is super fast!
    Anyway I hope that this helps.

  • Confusion about backing beans and multiple pages

    Hello:
    I'm new to JSF and am confused by several high level and/or design issues surrounding backing beans. The
    application I am working on (not unlike many apps out there, I suspect), is comprised of several forms which
    collect various query parameters and other pages that display the query results via dataTables. Additionally, the
    displayed results have hyper-text enabled content sprinkled throughout in order to allow further "drill-down" to
    greater details about a given result.
    Obviously, the initial forms are backed by a managed bean that has the getters/setters for each of the various
    fields. From a modularization perspective, I'd prefer to have the query execution in a separate class entirely,
    which I can accomplish via delegation or actionListeners. Where I get confused is in the case that one of the
    hyper-text enabled fields needs to invoke yet another query requiring specific values from the original result.
    In this case, how should I design the beans to store the pertinent selection information AND still be able to
    have the resulting 'details' query in a separate class.
    For example, let's assume that I have a form with a simple backing bean to collect initial query params. The user
    fills out the form and clicks on submit. The action field of the <h:commandButton> tag inside the <h:form>
    delegates to a method which performs the query and returns a navigation string to the results display page. The
    results page then uses a <h:dataTable> tag to display the results. However, through the process of displaying the
    results, it will create the hyper-text links as follows:
                        <h:form>
                            <h:commandLink action="viewDetails">
                                <h:inputHidden id="P_ID" value="#{results.id}"/>
                                <h:outputText value="Click here to see details"/>
                            </h:commandLink>
                        </h:form>
    Notice that value="#{results.id}" is the key field (in some cases I may need more than one) that the subsequent
    details page will need to perform the query. However, this value is stored in the results attribute of the original
    query backing bean. I do NOT want to add yet another level of queries into the same backing bean as I would like
    to keep the original and details query beans separated for maintenance and style reasons. How then, can I get this
    specific value ("#{results.id}") into another backing bean that could be accessed by the subsequent details page
    and query? In general, how does one map a single display element to multiple backing beans (or accomplish the
    effect of having one 'source' bean's value end up in another 'destination' bean's attribute)? If this is not possible,
    then isn't the details query/page forced to use the same backing bean in order to get the data it needs? If so,
    how can it determine which id was selected in this case?
    I'd appreciate any thoughts, ideas and suggestions here! Is there a better way to design the above in order to
    maintain the separation of logic/control? What is the 'best practice' approach for this sort of stuff?
    Thanks much in advance!

    This is what I have done in the past:
    <h:commandLink action="#{AppAction.getRecord}">
        <h:outputText value="#{bbr.id}"/>
        <f:param name="recordid" value="#{bbr.id}"/>
    </h:commandLink>
    Within the getRecord method:
    FacesContext facesContext = FacesContext.getCurrentInstance();
    ServletContext servletContext = (ServletContext) facesContext.getExternalContext().getContext();
    String key = (String) facesContext.getExternalContext().getRequestParameterMap().get("recordid");
    Then call your DB method, passing in the key of the record the user wants to edit, and then simply return a string naming the page you want the user to go to.
    I think this is what you're looking for.

  • Confused about polkit-gnome and lxpolkit [solved]

    I recently updated, and new versions of goffice and gnumeric came down with this warning:
    ==> The agent is no longer autostarted by default except in GNOME Flashback.
    For Xfce, LXDE etc., "lxpolkit" is the suggested lightweight alternative.
    See https://wiki.archlinux.org/index.php/Polkit#Authentication_agents for
    more details.
    I use Xfce 4 and did not know polkit-gnome was installed at all. I removed it and pacman did not complain about dependencies. What exactly do I need lxpolkit for? I am able to use sudo without it.
    Last edited by maggie (2013-09-16 15:40:49)

    I'm confused too:
    Using LXDE, I was not even aware that there is an alternative to polkit-gnome, as it is always
    mentioned as a quasi-dependency of gvfs:
    The gvfs package needs to be installed, along with polkit-gnome for the polkit rules.
    This seems to be wrong, as neither gvfs nor the polkit rules depend on the polkit-gnome authentication agent in any way.
    Information in the wiki(s) is very inconsistent!
    From LXDE-Page:
    polkit-gnome provides an authentication agent and will need to be started on login:
    $ mkdir -p ~/.config/autostart
    $ cp /etc/xdg/autostart/polkit-gnome-authentication-agent-1.desktop ~/.config/autostart
    What is it good for? Is there any reason to start it twice? The file mentioned is installed at '/usr/share/applications/polkit-gnome-authentication-agent-1.desktop' by default.
    Polkit-Page:
    To autostart it everywhere except in GNOME and KDE, copy the file /usr/share/applications/polkit-gnome-authentication-agent-1.desktop to /etc/xdg/autostart/
    LXDE-Wiki:
    gvfs and its dependencies (optional, but highly recommended):
    policykit-gnome (required for authentication for volume management)
    And last but not least, the awareness that the LX alternative has existed since March 28th, 2010, when PCMan himself told us polkit-gnome is not needed anymore:
    LXPOLKIT – SIMPLE POLICYKIT AUTHENTICATION AGENT:
    Generally when one needs to use PolicyKit, he or she needs to install policykit-gnome. Now we have our own.
    A new component, LXPolkit, was added. It's a minimal PolicyKit authentication agent.
    As far as I understand it, polkit-gnome only provides the password-prompt window and can be replaced by lxpolkit, which is more lightweight.
    And as it was never 'auto'started (see above) except in GNOME, the pacman.log message is IMHO highly questionable and confusing.

  • Confusion about BPM suite and BPA suite

    Hi,
    I'm very very new to Oracle BPM.
    I just checked the website and found that Oracle provides two suites for business process management: BPM and BPA. I'm quite confused about the difference in their usage; they look very similar to each other functionally.
    Is there any article that has already explained their difference?
    btw: if I want to choose a suite just for modeling, implementation, and monitoring purposes, does that mean the BPM suite is more suitable?
    Thank you
    Lesley

    Hi Lesley,
    BPA is a robust process modeling tool used by business analysts to model both their processes and the entire enterprise. BPA has many different types of diagrams that let you see the different levels of decomposition and abstraction. As you'd expect, BPA has a robust process simulation capability. If you hit the BPA forum (Business Process Analysis Suite) you'll get more information about it from experts in BPA.
    Oracle BPM was built and architected from the beginning as a full life cycle Business Process Management (BPM) tool. It is similar to BPA in that it supports process modeling, documentation and simulation. I use the same diagram to explain the process to executives, managers, IT, SMEs and business analysts. As a business analyst, the tool is not complicated and can be learned in just a couple of hours.
    Here's where the two differ. Oracle BPM was architected from the beginning as a full featured Business Process Management tool that implements and monitors business processes. Its capabilities include:
    - Logic - BPA is not a BPM tool so it is not intended to support the process logic needed for runtime. To support this, Oracle BPEL is used in conjunction with BPA. As a developer using Oracle BPM, I use templates and drag and drop to create much of my logic. Once I catalog an object it can be reused across multiple projects either by using the Project Dependency option or by importing the artifacts. I test my logic either by using the method editor debugger or at the process level. As I create objects used in my logic using Oracle BPM, I inherit attributes provided by introspected components. This means that if you have an ERP system object with 138 attributes, you do not have to rebuild this object from scratch in Oracle BPM.
    - Runtime / Execute the Business Processes - End users interact with the processes at runtime using an OOTB Workspace. End users are given various roles. When they log into the Workspace, the end users only see the work item instances that are in the roles that they have been assigned. There is an Engine that stores the work item instance information as it flows through the processes at runtime. The Engine is like a traffic cop, ensuring that the right work item instance goes to the right person in the right activity at the right time.
    - UI - The end user interface screens and complex end user interaction with a variety of screens can be built inside the Oracle BPM toolset's editor. The forms you build are automatically presented to end users in the Workspace.
    - Integration to Existing IT Assets - Oracle BPM can expose and consume IT components directly or through a service bus. Once consumed, the components can be used by any process needing to invoke them at runtime.
    Hope this helps,
    Dan

  • Confused about nonGUI beans and events

    The crux of this is the question "When will a NON GUI bean use events to communicate?"
    Articles I have read advocate that Beans are not just GUI components, and I agree. However I've only ever seen examples of GUI Beans!
    I am writing a non GUI Bean to represent a credit card.
    Before I got concerned with beans I would have just written the class with some accessor methods and some functional methods to do what I want, something like;
    void setCardNumber(String n)
    boolean isNumberValid() //check the card number using MOD10
    and then used that class in my application.
    But now I want to make it conform closer to JavaBeans. I know that beans have methods, but I am confused somewhat about how events figure into this. Do nonGUI beans fire events/listen for events? Very often?
    What I am asking is, what events should a Bean like CreditCard fire? The main function of this Bean is to determine if the number conforms to MOD10 checks using isNumberValid() method.
    Consider that I want to write another Bean, a GUI component that takes card details and updates the nonGUI CreditCard Bean. Of course I want loose coupling; not every user will use my CreditCard Bean (especially if I don't get these events straight!). If I use property change events to fill in the card number etc. in the CreditCard Bean, should I make it (the CreditCard bean) listen for focusLost events from the interface to determine when all the details have been filled in, and fire an event about the validity of the card number?? Doesn't this break the rules a bit, by giving the nonGUI CreditCard Bean prior knowledge of the interface??
    Should the CreditCard nonGUI bean fire an event to say that the number is valid/invalid?
    You will probably say, "WHY use events, when you only need methods?" and that is my question! When would a non GUI bean fire or listen to events? I want my non GUI bean to be completely ignorant of GUI stuff, so no focusLost listener.
    Should the credit card interface fire a custom event (CardDetailsEnteredEvent) and the non GUI bean listen for these? But that's tighter coupling than just implementing the methods isn't it?
    I think that the nonGUI bean will have property change listeners for the card number etc., but this begs the question: what is the point of setter methods if I have bound properties anyway??
    I've read everything pertinent at http://java.sun.com/products/javabeans/training.html
    Thanks to anyone who can stop my head hurting.
    Jim

    Ah, I think I may have figured it out from the spec.
    "Conceptually, events are a mechanism for propagating state change notifications between a
    source object and one or more target listener objects."
    key phrase being "state change" - my non GUI bean CreditCard only really has two states; 1. incomplete card details and 2. complete card details. So I guess I could/should only fire an event (DetailsEntered) when the details are completely 'filled in'.
    Which leaves me with the GUI interface bean. It has two states also, 1. details being edited (focus gained) and 2. details finished being edited (focus lost) - should this bean then fire its own event (DetailsEdited) or should it just rely on other beans listening for the focus lost event? It's my functional interpretation that a focus lost event signifies that the details have been edited, so perhaps I should implement my own event that has this contextual meaning (DetailsEdited).
    I'm aware that the names of these events are quite poor, I always find good naming to be another very important head scratcher!!
    Jim
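    A minimal sketch of the kind of non-GUI bean discussed above (the class, property, and listener wiring are illustrative, not Jim's actual code): cardNumber is exposed as a bound property via java.beans.PropertyChangeSupport, the MOD10 check stays a plain query method, and the bean knows nothing about focus or any other GUI event:

    ```java
    import java.beans.PropertyChangeListener;
    import java.beans.PropertyChangeSupport;

    // Non-GUI bean: no knowledge of widgets or focus events. Anything
    // interested in state changes (GUI or not) registers as a listener.
    // (Left non-public here so the sketch is file-name independent; a
    // real bean would be public with the implicit no-arg constructor.)
    class CreditCard {
        private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
        private String cardNumber = "";

        public String getCardNumber() { return cardNumber; }

        // Bound property: callers still use a plain setter, and listeners
        // are notified of the state change as a PropertyChangeEvent.
        public void setCardNumber(String n) {
            String old = this.cardNumber;
            this.cardNumber = n;
            pcs.firePropertyChange("cardNumber", old, n);
        }

        // Luhn (MOD10) check - a plain method; asking a question of the
        // bean needs no event at all.
        public boolean isNumberValid() {
            int sum = 0;
            boolean doubleIt = false;
            for (int i = cardNumber.length() - 1; i >= 0; i--) {
                char c = cardNumber.charAt(i);
                if (!Character.isDigit(c)) return false;
                int d = c - '0';
                if (doubleIt) { d *= 2; if (d > 9) d -= 9; }
                sum += d;
                doubleIt = !doubleIt;
            }
            return !cardNumber.isEmpty() && sum % 10 == 0;
        }

        public void addPropertyChangeListener(PropertyChangeListener l) {
            pcs.addPropertyChangeListener(l);
        }
        public void removePropertyChangeListener(PropertyChangeListener l) {
            pcs.removePropertyChangeListener(l);
        }
    }
    ```

    A GUI form bean would simply call setCardNumber(...) when its own focusLost fires, so the coupling runs one way, from GUI to bean; the bean never implements a focus listener, and anything that cares about validity either calls isNumberValid() or observes the cardNumber property change.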

  • Confused about wiki pages and groups (i'm new to wiki services).

    we set up wiki services on our newly upgraded 10.5 server, but there is something that i can't seem to understand.
    it seems that the only way to create a new wiki is to create a new group. i don't quite get how you can have some members of one group and some members of another group access the same wiki page.
    this will get very messy for our company because we want to set up wikis for the multiple projects we have going. this means that the amount of groups will escalate rapidly and surely will cause problems and confusion.
    we currently have 7 groups created on the WGM and my company is about 50 employees.
    each group currently has their own wiki which leads me to my original problem/question.
    if we create a new wiki, that creates a new group, and we don't want that.
    i could totally be missing something here as i mentioned i am new to the wiki services and 10.5 server in general.
    if you need more information from me, let me know.
    thanks in advance.

    anyone?? bueller...? bueller...?
    i can't seem to find any information about this. i even looked here http://www.apple.com/server/macosx/resources/ in the wiki documentation and didn't find anything.
    perhaps i wasn't entirely clear in my initial post.
    say i want people in the accounting and marketing group to share a wiki page. the only way i know how to do this, is to create a new group (let's call it wiki_group) and to have members of both accounting and marketing be members of wiki_group.
    as i mentioned in my initial post, this is going to be a huge problem.
    the only work-around that i can think of is to have the accounting group and marketing group be members of both. this is not acceptable due to security and resource sharing.
    how can i create wikis without having to create a new group, and to have members of other groups view and edit other wikis?

  • Question about Messages application and AIM.

    Hello, I purchased my first Apple device last year (iPhone), at which point I created an iCloud account and received an @me email address. This is also my Apple ID. Later in the year Apple also gave me an @icloud email like everyone else. I am enjoying my iPhone very much and this year I have decided to purchase my first OS X device (MacBook) soon. This leads me to my following questions:
    1. I wish to use the Messages application just for iMessage and not AIM or any other IM service. It is my understanding that about 2 years ago .mac and MobileMe users were having AOL create Lifestream profiles for them without permission. Now I have never been a .mac or MobileMe user and only signed up for iCloud last year as previously stated. I just want to know if AOL is still creating these profiles and if they affect iCloud users? I could not find any information about this issue being resolved. Would using the Messages application just for iMessage carry the risk of having a Lifestream profile created for my iCloud email?
    2. I am aware that when setting up my MacBook for the first time it will ask me to sign in with my Apple ID and then into iCloud. I plan to do this. I am told that this will sign me into iMessage automatically in the Messages application. What I want to know is whether this will sign me into AIM automatically too? I have no intention of ever using AIM, so I hope this is not the case.
    I hope I have made myself clear, as English is not my first language. Any help with these questions is greatly appreciated. Thank you.

    Hi,
    The last bit first.
    iChat was the application used before Mountain Lion and Messages.
    It could only join the AIM, Jabber and in iChat 6 the Yahoo services.
    These are Instant Messaging services so the text sent between you and a Buddy tend to be referred to as IMs.
    The AIM site I linked to above has an option (when you can log in there) that allows you to turn this AIM feature Off.
    Some Jabber servers can do it, but it tends to be an option the person running the Jabber server decides to enable or not.
    If a Server is set up to allow Off Line Messaging then those have to be stored.
    These can be a concern for the person running the server, both for storage space in the first place and for issues around governments wanting to "see" the data.
    A part of the iChat and Messages apps perform a "Listening" function when you turn your computer On but do not launch the App.
    This Listening will start up the App if anyone sends you an iMessage, AIM, or Jabber message.
    However this function of Messages can be turned Off.
    This pic was created for something else but you can see the second line down is about setting the Status to Offline when the App is Quit.
    This effectively stops the App doing the Listening.
    Back to the Lifestream issue.
    I am not sure what triggers the Lifestream to be set up.
    I used a Search on the Lifestream site for my main AIM Name (which I can set the Lifestream to Privacy > No-one) and for my @me.com one which I cannot log in to the AIM settings site.
    I cannot see anything returned in either case.
    I also tried when logged in under my main AIM name and got this for my @me.com ID.
    I think that when AOL/AIM started Lifestream all those people that had AIM and AIM valid Screen Names were included in Lifestream.
    It may now be different and you may have to sign up for it.
    There seems to be a set up page.
    It would seem that my @me.com ID and @icloud.com ID have not been set up for Lifestream, whereas my AIM account (Main AIM Name as I call it) was set up in the beginning.
    I am afraid I don't have any clearer information than that on this part of your enquiry.
    8:03 pm      Friday; October 25, 2013
      iMac 2.5Ghz 5i 2011 (Mountain Lion 10.8.4)
     G4/1GhzDual MDD (Leopard 10.5.8)
     MacBookPro 2Gb (Snow Leopard 10.6.8)
     Mac OS X (10.6.8),
     Couple of iPhones and an iPad

  • Questions about message logging and topic TTL

    We're in the final stages of testing our application. Basically a producer application is sending small, non-persistent messages once every 10 seconds and once every second to an auto-created topic. The consumer application(s) listening are then doing data display of this 1 second and 10 second data.
    Testing the client we noticed it complained of a single 10 second message not being received. We verified that the message was sent by the producer. My question is whether there is a logging setting on the IMQ server that could be used to log the receipt and delivery of every message, perhaps by JMSMessageID to a particular topic? The only idea we could think of is to use 'imqcmd metrics' to track the number of messages in/out, but this isn't as useful for us.
    Any ideas you could provide would be greatly appreciated.

    imqbrokerd does not support tracing at this level, so metrics is probably the best approach for now.
    If a subscriber is not getting a message it is usually because of one of the following:
    1. The message is expiring. If you are using message expiration make sure the clocks are synchronized between the clients and the server. imqbrokerd will log a message when it has expired messages.
    2. The subscriber was down when the message was produced (and it is not a durable subscriber).
    3. You have set a size limit on the destination or system-wide that is resulting in messages being discarded as specified by the limit behavior.
    With non-persistent messages you can get an additional level of reliability by setting the following ConnectionFactory property to true:
    ConnectionConfiguration.imqAckOnProduce. This adds a handshake between the server and client when a message is sent and provides a mechanism for imqbrokerd to propagate additional message production errors back to the client.
    The following document at sunsolve.sun.com provides more details:
    ID70117 Sun Java[TM] System Message Queue: Tuning 3.0.1 and 3.5 For Robustness
    Joe
    http://wwws.sun.com/software/products/message_queue/

  • Confusion about BI Publisher and Dashboards

    Hi All,
    I am new to BI EE. I created a report in BI Publisher and after that I want to publish that report in BI Dashboards... Here is where my confusion starts: I created a repository for the required schema. Now do I need to build the report again in BI Answers and then publish that report to the Dashboard????
    Please clear up my confusion; I would be very grateful to you...
    Regards

    Hello,
    You don't need to be confused. To build a report in BI Answers you need to develop a repository. To build the repository you must be aware of data warehouse modeling so you can build the repository well. You can save those BI Answers reports. Create a new dashboard and just drag a report into your dashboard section to run it in the dashboard.
    For BI Publisher it's just simple query reporting, which does not depend on any repository. You can create a data source connection and build your own query, or get help from the BI Publisher query builder to build the query for you by selecting the proper DB objects and joins.
