Chromium seems to interpret fontconfig rules differently

I have noticed that a number of fonts are not being antialiased when rendered on Chromium, producing ugly and often unreadable output.  The very same fonts render nicely under Firefox.  The following images should give a sense of what I am seeing:
Chromium:
Firefox:
I have the sharpfonts AUR package installed and it is the sharpfonts fontconfig rules which seem to be causing the issue.  Up to this point, the sharpfonts rules worked beautifully across my entire desktop.  The fact that I am now getting different font output from Chromium and Chromium alone would seem to suggest that it is somehow interpreting my fontconfig rules differently.  Can anyone shed some light on what is happening here?
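To give a sense of the kind of rules involved, here is a generic fontconfig snippet of the sort sharpfonts ships -- this is a paraphrased sketch with a made-up family name and pixel size, not the actual sharpfonts file:
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <!-- hypothetical example: turn off antialiasing and force full hinting
       for one family below a certain pixel size -->
  <match target="font">
    <test name="family"><string>Tahoma</string></test>
    <test name="pixelsize" compare="less"><double>17</double></test>
    <edit name="antialias" mode="assign"><bool>false</bool></edit>
    <edit name="hinting" mode="assign"><bool>true</bool></edit>
    <edit name="hintstyle" mode="assign"><const>hintfull</const></edit>
  </match>
</fontconfig>
My guess is that Chromium is matching the <test> conditions (family name, pixel size) against different values than the rest of my desktop does, so rules like this end up applying where they shouldn't.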

Also, I get this rule:
IF
C206_0=1 and C206_0=0 and C206_0=0
THEN
C206_3=0
Confidence=0.95249695
Support=0.023413174
But it is impossible for C206_0 to be 0 and 1 at the same time.
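(For reference, I am reading confidence and support in the usual association-rule sense:
Confidence(A => B) = supp(A and B) / supp(A)
Support(A => B) = supp(A and B) / N
so a rule whose antecedent can never be satisfied should have support 0, not 0.023.)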
Chengliang

Similar Messages

  • Update rules, different source and target units (0BASE_UOM-- 0QVVMRA)

    Hi,
    I have a problem with update rules: I'm trying to map two key figures with different unit types together. They are both kilograms (KG), but the source key figure is defined with 0BASE_UOM and the target key figure is 0QVVMRA (Amount of sales).
    How can I make this connection work? Do I have to use the 'Unit calculation in routine' option, or is there a simpler way? If a routine must be written, please post an example or a link to a thread that has one.
    Thx!
    -miikka

    Hi,
    The problem is that you are trying to connect a key figure of type Quantity with a key figure of type Amount.
    An Amount key figure will have either 0CURRENCY or a fixed currency.
    Using update rule coding you could transfer the value, but you will have problems with the currency: you can't store a unit like KG in 0CURRENCY.
    Regards.

  • CF9 - cfinvoke seems to search scopes in different order

    We came across this in a legacy app when it was moved to CF9 -- a cfinvoke that had always worked in previous versions seemed to stop working whereas invoking the same method using function notation returned the right results.
    I've only tested the following on CF9 but it demonstrates the problem we had on CF9 which did not exist in previous versions.
    This uses cfinvoke and function notation to call the same method and they return two different results -- cfinvoke returns an empty struct, the function notation returns a correctly populated struct.
    Application.cfm:
    <cfapplication name="cf9test"
        sessionmanagement="Yes"
        setclientcookies="Yes"
        sessiontimeout="#createTimeSpan(0,0,60,0)#"
        applicationtimeout="#createTimeSpan(1,0,0,0)#">
    test.cfc:
    <cfcomponent output="true">
        <cffunction name="func1" returntype="struct">
            <cfargument name="stObject" required="no" default="#structNew()#">
            <cfinvoke component="#this#"
                method="func3"
                returnVariable="stObject"><!--- Returns into Variables scope --->
            <!--- Returns the empty struct in arguments.stObject --->   
            <cfreturn stObject/>
        </cffunction>
        <cffunction name="func2" returntype="struct">
            <cfargument name="stObject" required="no" default="#structNew()#">
            <cfscript>
                stObject = func3(); // Returns into Arguments scope
            </cfscript>
            <cfreturn stObject/>
        </cffunction>
        <cffunction name="func3" returntype="struct">
            <cfscript>
                stTest = structnew();
                stTest.foo = "blah";
                stTest.blah = "bingo";
            </cfscript>
            <cfreturn stTest />
        </cffunction>
    </cfcomponent>
    index.cfm:
    <cfinvoke component="cf9test.test"
            method="func1"
            returnVariable="stObject">
    Return from the cfinvoke:        
    <cfdump var="#stObject#">
    <cfset structClear(stObject)>
    <br />
    <cfinvoke component="cf9test.test"
            method="func2"
            returnVariable="stObject">
    Return from the function       
    <cfdump var="#stObject#">

    I don't see the problem. All you had to do to avoid ambiguity was to var the variables in the methods and to give them distinct names. For example
    <cfcomponent output="false">
        <cffunction name="func1" returntype="struct" output="false">
            <cfargument name="stObject" required="no" default="#structNew()#">
        <cfset var stObject1 = structNew()>
            <cfinvoke component="#this#"
                method="func3"
                returnVariable="stObject1">
            <cfreturn stObject1/>
        </cffunction>
        <cffunction name="func2" returntype="struct" output="false">
            <cfargument name="stObject" required="no" default="#structNew()#">
            <cfscript>
            var stObject2 = structNew();
            stObject2 = func3();
            </cfscript>
            <cfreturn stObject2/>
        </cffunction>
        <cffunction name="func3" returntype="struct">
            <cfscript>
                var stTest = structnew();
                stTest.foo = "blah";
                stTest.blah = "bingo";
            </cfscript>  
            <cfreturn stTest />
        </cffunction>
    </cfcomponent>

  • BUG:  Saved Brightness/Contrast Actions Steps Interpreting "Use Legacy" Differently

    Photoshop CS6 is interpreting saved Brightness/Contrast action steps differently than CS5.
    The following two screen grabs were captured after double-clicking the exact same saved step in a saved action that removes banding noise. 
    Note that Photoshop CS5 has interpreted the step as having the [ ] Use Legacy value selected, while CS6 has not.
    This could be at the root of why some folks are reporting actions compatibility issues.
    CS5:
    CS6:
    -Noel

    Allowing this failure to remain in the release is going to mean I'm going to have customer issues with my commercial actions, many of which I have not had to change for 6 major version releases now. 
    It needs to be addressed!
    Other folks are getting acknowledgement and feedback from Adobe personnel, "I have logged a bug", etc. Could I please ask for the same courtesy?
    I can supply you with an action you can use to test with if you'd like.  Matter of fact, here's one.  Run it on Photoshop CS5 and CS6 to see a major difference in results.
    http://Noel.ProdigitalSoftware.com/ForumPosts/CS6_Bugs.zip
    -Noel

  • Premiere Interpreting Same Filetype Differently

    I'm working on a Mac Pro 2.66 GHz 6-Core Intel Xeon with 16GB of RAM. I am working over a network via 10GbE fibre connected to a main storage array, and am working in CS6. I am creating cutdowns of weekly hour-long podcasts and then exporting them for the web. A description of my current workflow (simplified for troubleshooting purposes) is:
    1. Ingest MP4 footage
    2. Create a sequence, which requires nesting multiple hour-long clips
    3. Export that edited sequence as an mp4 compressed further for youtube.
    Prior to today the final encode has been taking 1.5 hours. Today the encode is taking 5 hours. I have been checking to see what's different, and the only thing that I have come up with is that one of the files of today's footage is getting imported as a QuickTime movie while last time around the footage was getting imported as an MPEG movie. When you select the properties of last week's and this week's file in Premiere, this is in fact the case. Plus, the QuickTime one is actually a "QuickTime reference." There seems to be no way to get Premiere to import it a different way. What I don't understand is that both files are approximately the same size (give or take a few minutes), and when you pull up the info on the files in VLC or QuickTime both formats and settings are IDENTICAL. So the two questions I have are:
    1. Could the fact that premiere thinks one file is a quicktime be the reason why today's encode was 5 times longer?
    2. What could possibly be different about these two files that appear identical in format and wrapper?
    Thanks in advance for any insight.
    -Josh

    Josh,
    Welcome to the forum.
    This is but a wild guess on my part, but unless I mixed things up, it's the QT Reference that is taking longer to process. My guess is that PrPro needs to "fill in the blanks" on the Reference File. Why one, nearly identical file would show MPEG, and the other QT Reference, is a mystery to me.
    Also, I'm a PC-guy, but do work with a lot of MOV files from outside contractors (mainly 3D animators), but I specify MOV Animation, and those work just fine.
    I hope that others will have something useful for you, beyond my guesses.
    Good luck,
    Hunt

  • When I drag a playlist from the list in the sources panel up to my iPod icon, the music is added immediately to my iPod. Are the rules different for this than for 'syncing' selected playlists to the same iPod?

    I am trying to put music from two different libraries onto my iPod. In the first case I can, but in the second the program says it will erase all content and sync only the playlists I have checked. What, in general, is the method for loading an iPod with music from two different libraries?

    Yes, if you manually manage music and videos, iTunes syncs the content immediately. If you deselect “Manually manage music and videos,” the content you added manually is removed from iPod touch the next time iTunes syncs content. To transfer music from multiple computers, your device must be set to "Manually manage music," sometimes referred to as manual mode.
    In general, follow these instructions for Using iPad or iPod with multiple computers, http://support.apple.com/kb/HT1202

  • I've imported a Garage Band project but the instruments seem to be in a different key to the vocal. Any ideas why?

    I did a project in Garage Band but want to finish and mix it in Logic Pro. The vocals are in the right key but the instruments (guitars and bass) are in another key. I did the project originally in C, then changed it to D in Garage Band and recorded the vocal track after that. Any ideas why the instruments seem to still be in the original key, and how to fix this?

    Check/use the transpose section of the track/region inspector, or global transposition, to correct/update the transposition value you want.
    http://help.apple.com/logicpro/mac/10/#lgcp912ee811
    http://help.apple.com/logicpro/mac/10/#lgcpf7c0d270

  • The latest version of iTunes seems to have merged 2 different libraries on my computer. I want them separate. What to do?

    I downloaded the most recent version of iTunes a few days ago. It seems to have merged my music library with my daughter's.
    These are two separate accounts. I don't want them merged. What do I do?

    From http://www.apple.com/iphone/specs.html
    System Requirements
    Apple ID (required for some features)
    Internet access
    Syncing with iTunes on a Mac or PC requires:
    Mac: OS X v10.6.8 or later
    PC: Windows 7; Windows Vista; or Windows XP Home or Professional with Service Pack 3 or later
    iTunes 10.7 or later (free download from www.itunes.com/download)
    You can not run OS X 10.6 on a G4, or even a G5. It is Intel only.

  • Advice on dual-booting Windows 7 with UEFI motherboard

    I'm going to build a desktop PC tomorrow, having finally purchased all the parts for it. I'll be installing Arch as my main OS, and Windows for gaming. However, I'm not really versed in UEFI and its uses and advantages/disadvantages, since my laptop just uses BIOS.
    My plan is to have 3 drives: 32GB SSD for the / partition, 1TB HDD for /home, and 500GB for Windows 7 x64 Ultimate.
    Being unused to UEFI I was thinking about trying to just run everything in BIOS/Legacy mode, but that doesn't seem very sensible to me, especially since I have the hardware so I might as well use it.
    So, reading the wiki and forums have led me to conclude that having a 1GB EFI System Partition on the SSD should be sufficient, and use gummiboot for my bootloader.
    Other reading about setting up dual boots suggests to me that installing Windows 7 on its own HDD with MBR partitioning and Arch on a separate (set of) drive(s) with GPT partitioning will be sufficient. The reason being that if the BIOS is set up to boot sda, which has GRUB as its bootloader, using GRUB I can choose to boot into Windows despite it being on a separate hard drive.
    My questions are (and it occurs to me that I am in the most part just looking to have my ideas confirmed):
    1. Have I gotten this all completely wrong?
    2. If I'm correct, can the above system of using GRUB on one drive to boot up an OS on another drive be applied to UEFI?
    3. Has anybody tried/succeeded/failed to dual-boot in this fashion before me, and if so what did they do?
    Thanks one and all! Hopefully I've made myself clear enough here

    billodwyer wrote: Being unused to UEFI I was thinking about trying to just run everything in BIOS/Legacy mode, but that doesn't seem very sensible to me, especially since I have the hardware so I might as well use it.
    Using BIOS/CSM/legacy mode can work fine; however, it will probably slow down the boot process by a few seconds, and it will close off some possible future (and even current) advantages, as EFI support in Linux is improved.
    So, reading the wiki and forums have led me to conclude that having a 1GB EFI System Partition on the SSD should be sufficient, and use gummiboot for my bootloader.
    A 1GB ESP is more than sufficient. In terms of space requirements, 100-500MB is enough, depending on how you use the ESP; but various bugs and default settings make me recommend 550MiB as a good size. Bigger is OK, but wastes some disk space.
    A bigger issue is that the ESP won't really benefit much from being on your SSD, since it's read once at boot time. The biggest advantage to putting the ESP on the SSD in your setup is that if you use gummiboot, you'll also have to put the Linux kernel and initrd file on the ESP, so having them on an SSD will speed up the boot process by about 1-5 seconds. Overall, I'd probably put the ESP on one of the spinning disks.
    One more comment: gummiboot can launch boot loaders from its own partition but not from other partitions. This can work fine if you plan things carefully, but with three disks and two OSes, you must be absolutely positive that Windows uses the ESP on which gummiboot is installed. I'm not an expert on Windows installation, so I can't offer any specific pointers or caveats on this. If you need something with more flexibility, both rEFInd and GRUB can redirect the boot process to other partitions or physical disks. rEFInd can also redirect from an EFI-mode boot to a BIOS/CSM/legacy-mode boot. (See below.) Overall, rEFInd's flexibility on this score is a plus compared to gummiboot; but gummiboot is covered in the Arch wiki's beginner's guide, which is a plus. You'll have to pick which advantage you prefer. (Note that I'm rEFInd's maintainer, so I'm not unbiased.)
    Other reading about setting up dual boots suggests to me that installing Windows 7 on its own HDD with MBR partitioning and Arch on a separate (set of) drive(s) with GPT partitioning will be sufficient. The reason being that if the BIOS is set up to boot sda, which has GRUB as its bootloader, using GRUB I can choose to boot into Windows despite it being on a separate hard drive.
    This is an unworkable idea, at least as stated and if you want to do an EFI-mode boot. Windows ties the partition table type to the boot mode: Windows boots from MBR disks only in BIOS mode, and from GPT disks only in EFI mode. Thus, using MBR for the Windows disk will require a BIOS/CSM/legacy-mode installation of Windows. Furthermore, neither gummiboot nor GRUB can redirect from EFI mode to BIOS mode (or vice-versa), so if you do it this way, you'll be forcing yourself to boot Linux in BIOS mode, to switch between BIOS-mode and EFI-mode boots at the firmware level (which isn't always easily controlled), or to use rEFInd to redirect from an EFI-mode boot to a BIOS-mode Windows boot.
    Overall, you're best off either using GPT for all your disks and booting all your OSes in EFI mode or using MBR for Windows (and perhaps all your disks) and using BIOS-mode booting for all your OSes.
    Under EFI, the boot process is controlled by settings in the NVRAM, which you can adjust with "efibootmgr" in Linux, "bcfg" in an EFI shell, or "bcdedit" in Windows. (The Arch wiki covers the basics at least efibootmgr and bcfg.) In a typical dual-boot setup, you tell the computer to launch your preferred boot manager (EFI-mode GRUB, rEFInd, or gummiboot, most commonly), which then controls the boot process. You set up boot loaders for all your OSes on one or more ESPs. (Note: A boot manager lets you choose which boot loader to run, and a boot loader loads the kernel into memory. GRUB is both a boot manager and a boot loader. rEFInd and gummiboot are both boot managers. The EFI stub loader, ELILO, and the EFI version of SYSLINUX are all boot loaders but not boot managers. Most EFIs include their own boot manager, but it's usually primitive and awkward to use. It's also not standardized, so my computer's built-in boot manager is likely to be different from yours. Thus, I recommend against relying on the built-in boot manager for anything but launching your preferred boot manager.) Thus, the lowest-common-denominator type of setup is to put your preferred boot manager, the Windows boot loader, and a Linux boot loader (which could mean your Linux kernel) on a single ESP. If you want to use multiple ESPs or otherwise split things up, you cannot use gummiboot as the boot manager, since it can't redirect the boot process from one partition to another. (Many EFIs can do this with their own built-in boot managers, but this isn't guaranteed, and it's usually more awkward than using rEFInd or GRUB.)
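    (For concreteness, registering a boot manager in NVRAM with efibootmgr looks roughly like this; the disk, partition number, and loader path below are examples and must match where your boot manager actually lives:)
    efibootmgr --create --disk /dev/sda --part 1 --loader '\EFI\refind\refind_x64.efi' --label "rEFInd"
    Running efibootmgr with no arguments afterwards lists the entries and the boot order.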
    I know this can be a lot to absorb. The official rules aren't really all that complex, but different EFIs interpret the rules differently, and the different capabilities of the various boot managers and boot loaders create a lot of subtle implications for how you set everything up.
    1. Have I gotten this all completely wrong?
    Significant parts of it, I'm afraid; see above. You're working under BIOS assumptions, which don't apply to EFI.
    2. If I'm correct, can the above system of using GRUB on one drive to boot up an OS on another drive be applied to UEFI?
    GRUB can do this, but gummiboot can't. You set one of those (or something else, like rEFInd) as your primary boot manager. Using both GRUB and gummiboot adds unnecessary complexity, IMHO. OTOH, setting up multiple boot managers or boot loaders is possible, and can give you a fallback in case one fails. For instance, there's a known bug that affects 3.7 and later kernels, mostly on Lenovo computers, that causes the EFI stub loader to fail sometimes. Thus, if you use rEFInd, gummiboot, or the EFI's own boot manager to launch the kernel via the EFI stub loader, having GRUB, ELILO, or SYSLINUX set up as a fallback can provide helpful insurance in case a kernel upgrade causes your normal boot process to fail.
    3. Has anybody tried/succeeded/failed to dual-boot in this fashion before me, and if so what did they do?
    Many people dual-boot Windows and Linux under EFI. There are a huge number of possible solutions. My own Windows/Linux dual-boot system uses:
    rEFInd
    rEFInd's EFI filesystem drivers
    Linux kernels on Linux-native /boot partitions (two partitions, one for each of the two distributions installed on that computer)
    The Windows boot loader on the ESP
    This works well for me, but it wouldn't work with gummiboot instead of rEFInd, since gummiboot can't redirect the boot process to another partition. (gummiboot also can't automatically load filesystem drivers.) Arch Linux users who use gummiboot often mount the ESP at /boot, which enables gummiboot to easily launch the Linux kernel. Doing this with multiple Linux distributions would be awkward, though, since you'd end up with two distributions' kernels in the same directory.
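    (For reference, if you do go the gummiboot route with the ESP mounted at /boot, a loader entry is just a small text file such as /boot/loader/entries/arch.conf; the root device below is an example and must match your own layout:)
    title   Arch Linux
    linux   /vmlinuz-linux
    initrd  /initramfs-linux.img
    options root=/dev/sdb2 rw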

  • Changing duration after applying motion properties?

    I have a series of still images that I applied scaling and motion to and I now want to change their duration so I right-click, choose Speed/Duration and change the timecode for the duration to make the still image take less time on screen. After doing this the scaling and motion no longer work. Is there a way to change still image duration after applying scaling and motion to them?
    Thanks.

    Looking for some answers about clip duration (I realise this thread is 1 year old!) I bumped into this one. I wonder if part of the above problem had something to do with pictures coming from different cameras - I seemed to notice that with the same preference choice some clips seem to interpret the duration differently, probably when the aspect ratio is different from one camera to another, or if someone has changed the aspect ratio on that camera during a shooting session.
    I do remember that to change the default duration via preferences it has to be done before importing photos into the project window, or, as someone pointed out, re-import them after changing the default.
    I also thought that after changing the duration of clips that had keyframes for various functions, the new duration may have excluded those keyframes, because the shortening had truncated the part where the end keyframes were, thus making them void.
    (I'm a little surprised to be in this thread, as I had written "still images duration" in the question field and was in another thread with 5 entries, the last one from Anne Benz with a shot of a timeline, also from Oct 2012! But somehow when I tried to add to it I could not, and found that for some unknown reason I was not signed on. Now I come back signed on, with the same question, and I'm in a different thread?)
    Perhaps the following may be interesting and helpful for some. Changing the duration of stills in the timeline can be quite useful when one needs to match the video sequence with its added audio: say you have a display of stills with transitions, animation etc. and you add some music to it, and they don't quite finish at the desired point (usually the end of the music or song). You can fine-tune it (within a few seconds) by shortening or lengthening the clips' duration. You can also select the whole video timeline and, after deciding how much difference there is between your portion of music and your clips, apply that time; sometimes adjusting only a few clips is enough, other times the whole of it works better. I find it's OK only for a few seconds though, as there are a couple of problems with this: the keyframes may need to be readjusted or may even be out of action (as explained earlier). In that case it's probably better to have the video clips on a sequence of their own, then add the music on a new sequence and match them that way on the same timeline.
    PS: Titles and anything relying on proper timing placement above this video display in the timeline need to be checked, and perhaps included or adjusted with the clip duration change.
    I really was looking at ways of creating a sequence with a variety of animations that I could reuse for other projects with "Copy" then "Paste Attributes", in the hope it would be quicker than going back to the Effect Controls all the time. I remember MS Movie Maker had a few icons for this purpose that one would apply to a clip; it was rigid but quite effective. This way, even if I needed to adjust the keyframes, they would already be in place. (I already use this technique with similar pics: I copy one animation I like and paste its attributes to a bunch of pics, sometimes reverse the keyframes, copy, then select alternate pics from the previous selection and paste the new attributes to them... maybe not everyone's taste!)
    So if I could import such a sequence into my projects and place it on Video 2, with each pic labelled with its appropriate animation and obviously keyframed accordingly. (I would have created a photo folder and renamed each of its photos with one of the desired labels, i.e. Pan down, Pan left, Zoom in, Rotate %, etc.) But I thought I hit a snag with the duration...
    I have actually given this a quick try and it seems to have some value. I just found out that it's worthwhile to pick the shortest but still meaningful labelling for these "templates" in order to identify them easily in their smallest view (timeline zoomed out), i.e. Pan Left = PL, PR, PU, ZI, ZO, etc.

  • SQL Developer 1.51.5440 Font Settings

    Hi,
    In my other editors & dev tools, I set the font to Lucida Console 11. It looks OK. However, SQL Developer 1.51.5440 seems to interpret the font setting differently:
    - Lucida Console size 14 looks like size 10 in other editors.
    - ClearType is not respected (fonts look pixelated).
    Also, is it possible to set the font for the data grid (the one that displays columns, constraints, data, etc. when clicking on a table)?
    Thanks in advance for any help.

    Changing the font in the grid is currently only possible through Look&Feel themes.
    Nevertheless, there's a feature request at http://apex.oracle.com/pls/otn/f?p=42626:39:482060356626746::NO::P39_ID:7001 (Raghu, a developer, promised to check the status).
    The theme's fontsize can be changed in the IDE.Properties file (under your user profile). If you're lucky, your problem with the size in the other areas might be affected by this also.
    As for ClearType, I understand you need a JDK 1.6. Sqldev comes with a bundled 1.5, but you can download the latest non-beta and set its use in sqldeveloper.conf (under the installation folder):
    SetJavaHome C:\Archivos de programa\Java\jdk1.6.0_07
    Hope that helps,
    K.

  • GTL Problem Checking existence of variables

    Hi,
    I'm using PD 15.3 on a Win7 system.
    In a PDM targeted at the Oracle DBMS, while enhancing the generation of triggers, I'm trying to reference a variable without being sure whether it exists.
    I use:
    .set_value(v1,,new)
    %v1?% shows: "true"
    %v2?% Shows "true" while "false" is expected since v2 is not defined.
    The trigger source will be generated from templates which are defined within an XEM model extension. Under the metaclass Trigger I defined a "Generated File" which contains the same code snippet as above. The generated file shows up when previewing the trigger, and there the code works as expected:
    %v1?% shows "true"
    %v2?% shows "false"
    GTL seems to interpret the question-mark operator differently depending on whether it is used within a DBMS extension or within an XEM extension.
    Can anybody help?

    Hi,
    The issue is around the evaluation of the trigger text, which does not use the same mechanism as the GTL.
    This should be considered a bug.
    A workaround is to define, under the trigger profile, a new template "isDefined" with the following (which relies on the buggy behaviour, so once the bug is fixed, the following will no longer work):
    <<
    .set_value(percent, "%", new)
    .set_value(varname, "%percent%%@1%%percent%", new)
    .if(%varname%==%*@1%)
    false
    .else
    true
    .endif
    >>
    What is used here is that the variable evaluation returns the variable name when the variable is not defined.
    To check this, I rebuild the variable name with percent signs around it and compare it with the variable evaluation (%*@1%).
    In your trigger template, you replace %v1?% with %isDefined(v1)% and it should return the correct value.
    Marc

  • Hi there, am having a big problem updating my iPhone due to me forgetting my password; can you tell me how to retrieve it? It seems like I have one Apple ID for downloads and a different one for updates. Can you help please?

    Hi There
    Am having a big problem updating my iPhone 4 apps, due to a problem with the password for my Apple ID. But it seems that I have two different IDs, one for the updates and another for the downloads. I am not able to retrieve my ID password for updating my apps. Can you help me please?
    Thank you

    What program did you use to fix the internal hard drive? What repairs did it report making? More crucially, if repairs were reported, did you re-run the utility until it reported no problems found? It's possible for one set of problems with a drive to mask others.
    If you have about 5GB or so free space on the internal hard drive, you can use your OS X Install disc to perform an Archive and Install to recover from the failed Software Update. When the computer restarts, download and run the OS X Update Combo 10.4.11 (Universal). When the Mac restarts after that, run Software Update. During all this, make sure nothing interrupts or shuts down the Mac! Note that the first start after each of these updates will take significantly longer than subsequent starts, so be very patient.
    How did you back up your internal drive to the external one? Did you just drag things over in the Finder, or did you use a utility such as SuperDuper! or CarbonCopyCloner? Does the external HD show up as a bootable volume in System Preferences > Startup Disk, or using Startup Manager?

  • Office 2013 - OSD - Specify different config xml

    Hi Guys
    I want to deploy Office 2013 in different languages to different users. It seems there are a few ways to do this. But I am thinking I want to package all languages I need into one Office install package and then in the task sequence have groups for countries.
    Each country has a variable to specify which config.xml to use. Is this possible, or is there a better way?
    (I just need to know how to point to the correct XML in the package, kind of like the unattend file.) I know the rest, if this is the way to go.
    i.e.:
    -France
    ---ConfigFR.xml
    -Germany
    ---ConfigGe.xml
    -Sweden
    ---ConfigSE.xml
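    To be concrete, what I picture each country group running is something like this (the file names above are my own, and the exact package layout is just a guess):
    setup.exe /config ConfigFR.xml
    i.e. one install step per country group that passes that country's XML to Office setup on the command line.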
    Thanks
    NN

    Hi
    I know the original question has been answered, but I have a follow up question.
    I am deploying Windows 8.1 Update using custom variables to install Windows language packs and Office language packs dynamically based on rules. Running just one language works fine. But if I have a multiple-language package deployed to Windows, the Office DTs do not pick this up (each DT is currently working off one language per DT and using the operating system language to install the correct XML).
    See here:
    These are the DTs
    After doing a bit more reading, I now know I need to set up the rules differently. Can I set the rules to be: English and Swedish install DT 1, English and Romanian install DT 2, English alone install DT 3, etc.?
    I basically need an "AND" rule.
    thanks again
    NN
    *you may have to zoom to see the larger image.

  • Where to check/enable for log keeping track of transport rule actions?

    I have implemented some transport rules to "journal" all emails from specific clients as per this thread.
    So there are 4 transport rules to capture all those email:
    1. email from Clients (incoming / FROM)
    1.1 from users outside the organization.
    1.2 sent to member of AD Group
    1.3 sent to users inside the organization.
    1.4 where the from address contains "domain of our clients list"
    1.5 BCC to capture mailbox
    2. email to Clients (outgoing/ TO)
    2.1 from member of AD Group
    2.2 from users inside the org
    2.3 sent to users outside the organization.
    2.4 where the to address contains "domain of our clients list"
    2.5 BCC to capture mailbox
    3. email to Clients (outgoing/ CC)
    3.1 from member of AD Group
    3.2 from users inside the org
    3.3 sent to users outside the organization.
    3.4 where the cc address contains "domain of our clients list"
    3.5 BCC to capture mailbox
    4. email to Clients (outgoing/ BCC)
    4.1 from member of AD Group
    4.2 from users inside the org
    4.3 sent to users outside the organization.
    4.4 where the bcc address contains "domain of our clients list"
    4.5 BCC to capture mailbox
    The symptoms: when I spot-check random emails, everything seems to run fine (the transport rule filtering does get incoming and outgoing messages to that “capture” mailbox), and I tested this successfully with some test emails in different domains.
    Somehow I am not getting the results I want. With business sending some test sets that I should be finding in that mailbox, I do not find everything. Some of the emails that logically should have been captured are not there. Is business lying about the test sets they send? I don't think so, and the fact is that I seem to be missing emails.
    Anyhow my questions to you are the following:
    1.    Do you know of any logging done by the transport server to check on matches of the filters?
    2.    I am using outside and inside conditions in the rules. Are they what I think they are?
    I hope you can help. I think I am doing this right, but I cannot verify the process 100%. Some logs or additional information would help. Or perhaps I am not using the conditions properly.
    Thank you in advance.
    and BTW the environment is Exchange 2007

    Based on my research, there is no specific log to match the filters. During the mail flow, only SMTP log and Message Tracking log can record the message information.
    You can check the two logs if needed. For more information, please refer to the following steps.
    Enable Message tracking log
    1. Open the Exchange Management Console. 
    2. In the console tree, expand Server Configuration, and select Hub Transport.
    3. In the action pane, click the Properties link that is directly under the server name.
    4. In the Properties page, click the Log Settings tab.
    5. In the Message tracking log section, Select Enable message tracking log to enable message tracking.
    6. Click Apply to save changes and remain in the Properties page, or click OK to save changes and exit the Properties page.
    Enable SMTP Log
    1. In the console tree, expand Organization Configuration, and select Hub Transport.
    2. In the action pane, click on Send Connectors, right-click on the send connector, and then click Properties.
    3. Select “Verbose” under “Protocol logging level” and then click ok.
    Then, you can find the logs from the following location.
    Collect Message Tracking Log
    On the Exchange server, go to directory “c:\program files\Microsoft\exchange server\TransportRoles\Logs\Message Tracking”
    Collect SMTP log
    Open the folder on the Hub Transport server: C:\Program Files\Microsoft\Exchange Server\TransportRoles\Logs\ProtocolLog\SmtpSend.
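    If you prefer the Exchange Management Shell, roughly equivalent commands would be the following (the server and connector names here are placeholders for your own):
    Set-TransportServer -Identity "HUB01" -MessageTrackingLogEnabled $true
    Set-SendConnector -Identity "Internet Send Connector" -ProtocolLoggingLevel Verbose
    # Then search the tracking log, e.g. for a given sender since a given date:
    Get-MessageTrackingLog -Server "HUB01" -Sender "user@clientdomain.com" -Start "01/01/2013 00:00" | Format-Table Timestamp,EventId,Source,Sender,Recipients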
    Thanks.
    Novak
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.
