Typekit vs Edge Fonts -- which is better? Best practices?

I'm not sure I understand the difference between Edge Web Fonts and using a Typekit kit. In this case I'm using Source Sans Pro, which is available under both.
In my first attempt at generating a Reflow project, I had the text in Photoshop set in Source Sans Pro, with a copy of the font active on my Mac (via FontXplorer). When I ran my first 'Generate' test, the text came through without the font, just a 'Browser Default.' But I do believe Source Sans was available as a choice in the Styling tab.
For my second attempt, I deactivated the local font and instead turned it on in my Typekit* under Creative Cloud, and reopened the Photoshop doc with that version (the layers had to update, and all was well). This time when I generated the Reflow project, the type came through as Source Sans Pro.
So, which is the best way to use these fonts in Reflow? I do notice that the CSS only says "font: source sans pro", which means I'm still going to have to add the specific font-loading code to my HTML and CSS by hand, correct?
*I saw a demo video (which I can't find now, otherwise I would link it) in which someone generated a Reflow project from Photoshop that contained Typekit fonts, and Reflow asked him to enter the Kit ID in a pop-up window and then re-select his fonts to match what he had originally chosen in Photoshop. In my second attempt I was using the Typekit version of the font, but I was not prompted for the Kit ID like this upon opening the Reflow doc. Has this feature been changed or removed since then? I was able to enter the Kit ID in the 'Custom' tab when I chose 'Manage fonts', but then I had duplicates listed in my font menu until I deactivated the Edge version.
Sorry for the long post -- just wondering which way is best since Edge Fonts and Typekit seem to have redundant functionality!
JVK

First, there shouldn't be a difference between the two. The only suggestion is to try not to use both. You can, and it is supported, but it results in more HTTP requests, because all your selected Edge Web Fonts are loaded in one file and your selected Typekit fonts are loaded in another, and those can't be combined into one single file.
Also, if you are just syncing your fonts using Creative Cloud to get them on your desktop, but not adding them to a Typekit "kit" and entering your Kit ID in the dialog, or if you are not seeing the dialog at all, then the Edge Web Fonts were selected for you automatically.
I think the reason Source Sans Pro didn't work the first time is that Reflow hadn't finished downloading the full font list from the servers. This list is cached locally, so the next time you use that font we'd find it in the list and select it for you. If Reflow finds matches for all the fonts you used, it won't pop up the font-picker dialog. If that dialog does pop up and the list doesn't have your Edge Web Font available, you can add it to the list by selecting Manage Fonts from that menu.
Hope that helps, and thanks for using Reflow. Let us know how you like the Photoshop import and anything else we can do to improve it.

Similar Messages

  • Which way is the best practice?

    Hi ,
    I am in doubt about which way is the best practice to initialize an instance component in a panel.
    That is, is there any difference, in terms of performance, between initializing a component at its field declaration or inside the constructor?
    import javax.swing.JPanel;
    import javax.swing.JScrollPane;
    1)
    public class MyPanel extends JPanel {
        // component initialized at field declaration
        private final JScrollPane scrollPane = new JScrollPane();
    }
    2)
    public class MyPanel extends JPanel {
        // component initialized inside the constructor
        private JScrollPane scrollPane;
        public MyPanel() {
            scrollPane = new JScrollPane();
        }
    }

    Correction to the above reply.
    By the way, can I avoid declaring a static field of the class itself when I need to implement a class that follows the Singleton pattern, like:
    public class CustomerMainPanel extends JPanel {
        public static CustomerMainPanel customerMainPanel = null;
        public static synchronized CustomerMainPanel getInstance() {
            if (customerMainPanel == null) {
                customerMainPanel = new CustomerMainPanel();
            }
            return customerMainPanel;
        }
    }
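    One common alternative (not from the original thread, just a minimal sketch of the standard initialization-on-demand holder idiom) keeps the static reference hidden inside a private nested class instead of exposing it as a public field; the class name is reused from the post above, everything else is illustrative:
    import javax.swing.JPanel;

    public class CustomerMainPanel extends JPanel {

        private CustomerMainPanel() {
            // build the panel here
        }

        // The nested class is initialized only on the first call to getInstance(),
        // and class initialization makes this thread-safe without synchronized.
        private static final class Holder {
            private static final CustomerMainPanel INSTANCE = new CustomerMainPanel();
        }

        public static CustomerMainPanel getInstance() {
            return Holder.INSTANCE;
        }
    }
    Note that this only moves the static field into the holder rather than removing it, and since Swing components should normally be created on the Event Dispatch Thread, sharing a panel as a singleton is worth questioning in the first place.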

  • Which is the best practice

    Hi all,
    I have some columns:
    select org_port, org_pname, dis_port, dis_pname, fin_port, fin_pname
    from table
    where condition
    I have to use these same columns in two or three places.
    Shall I repeat these columns in the various places, or should I select them in one SELECT statement, give them alias names, and reuse them wherever we want?
    Which is better, and which one is good practice?
    Please let me know.
    Edited by: user13329002 on Aug 30, 2010 6:34 AM

    Create a procedure which has the result columns from your query as OUT parameters. Then call the procedure from the different places where you need it.
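    Not part of the original answer, and in this thread the callers would more likely be PL/SQL blocks, but purely as an illustration of the OUT-parameter idea, here is a rough Java/JDBC sketch of calling such a procedure; the procedure name get_ports, its parameters and the connection details are all made up:
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Types;

    public class PortLookup {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; adjust for your environment.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "scott", "tiger");
                 CallableStatement cs = con.prepareCall("{call get_ports(?, ?, ?)}")) {
                cs.setString(1, "some_key");                // IN: lookup key (hypothetical)
                cs.registerOutParameter(2, Types.VARCHAR);  // OUT: org_port
                cs.registerOutParameter(3, Types.VARCHAR);  // OUT: org_pname
                cs.execute();
                System.out.println(cs.getString(2) + " / " + cs.getString(3));
            }
        }
    }
    The point of the procedure approach is that the column list lives in exactly one place, so every caller stays consistent when it changes.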

  • Which are the best practices with Mail for Mac OS and any email client for PC

    I've got some Macs at the office, and I want to make a best-practices manual for my users, covering things like: please use the paper clip to attach a file instead of dragging and dropping, because PC users otherwise see the photo embedded in the message body.
    Could someone help me?

    Why not print out Mail's Help files?

  • Which is the best practice for creating distribution channels and divisions

    Actually my company has 11 plants, and every plant has a different type of finished goods.
    Sales are made to order; the goods are exported and given to distributors.
    The goods are produced entirely to order, and one product goes exclusively to one customer.
    So, in this situation, what should the strategy be for creating divisions and distribution channels?

    hi,
    Just to add: since the types of products are totally different in nature, you could create that many divisions. But a customer might be buying all or only some of them, so use the common-divisions concept. You might as well use the common distribution channel, since you do exports today and may see domestic sales tomorrow, and there can be different types of customers in the domestic market.
    So having a few distribution channels helps, but this depends on the project scope.
    Basically, though, you are doing only export, so you can go ahead with one distribution channel.
    But remember that building up the enterprise structure is the most critical part and has to be done after a great deal of analysis, keeping the project scope in view.
    Hope it helps.
    Thanks
    Sadhu kishore

  • Auto-update turned ON/OFF -- which is the best practice?

    What are the pros and cons of having auto-updates turned on in Firefox?

    Information about autoupdate in general is here:
    https://support.mozilla.org/en-US/kb/update-firefox-latest-version?esab=a&s=auto+update&r=5&as=s
    The main pros are that your browser will always be as secure as we can make it, you'll get new user features, and you'll also help move the web forward by ensuring that web developers can start taking advantage of new open, standard web features.
    There are few cons, especially with silent/background updates in place. The main one would be if you require a very old plugin that is not being updated; however, realize that you are putting your computer at risk if you choose to do this.

  • DNS best practice in local domain network of Windows 2012?

    Hello.
    We have a small local domain network in our office. Which is the best practice for DNS: to set up a DNS server in our network that forwards to public DNS servers, or to use public DNS directly on all computers, including the server?
    Thanks.
    Selim

    Hi Selim,
    Definitely the first option, "set up a DNS server in our network forwarding to public DNSs", with all computers, including the server, configured to use the local DNS.
    An even better practice would be for this local DNS to point to a standalone DNS server in the DMZ which queries the public DNS.
    Using a centralized DNS utilizes the DNS cache to answer similar queries, resulting in faster response times and less internet usage for repeated queries.
    An additional DNS layer also helps protect your internal DNS data from attackers out on the internet.
    Using internal DNS on all the computers will also help you host intranet websites and access them directly. Moreover, when you are in an AD domain, the computers' DNS needs to be configured properly for AD authentication to happen.
    Regards,
    Satyajit

  • Best practice recommendation--BC set

    Dear friends,
    I am using the BC set concept to capture my configurations. I am a PP consultant.
    Let us consider one scenario: configuring plant parameters in transaction OPPQ.
    My requirement is:
    A.   Define Floats: (Schedule Margin key)
    SM key: 001
    opening period: 1day
    Float before prod: 2day
    Float After prod: 1 day
    Release period: 1 day
    B.   Number range
    Maintain internal number range as: 10 - from: 010000000999999999 (for planned orders).
    This is my configuration requirement.
    Method M1:
    Name of the BC set: ZBC_MRP1
    While creating the BC set for the first time, when defining the floats, I wrongly captured/activated the opening period as 100 instead of 001. But I correctly captured the value for the number range (for my planned orders).
    Now if you look at the activation log for my BC set, the BC set shows a "GREEN" light (Version 1, successfully activated, but the activated values are wrong).
    So I want to change my BC set values and reactivate the BC set with the correct value. I am now activating the same BC set again with the correct opening period (value 001). After reactivating the BC set, if I go into the BC set activation log, one more version (Version 2) has appeared with a "GREEN" light.
    So in my activation log, two BC set versions are visible:
    If I activate Version 1, the wrong values will be updated in configuration.
    If I activate Version 2, the correct values will be activated in configuration.
    Both versions can be activated at any point in time; the latest activated version is always on top.
    So method 1 (M1) means keeping one BC set name and maintaining different versions of the BC set, activating whichever version your requirement calls for.
    Method 2 (M2):
    Instead of creating versions within the same BC set, create one more BC set to capture the new values.
    So if I activate the second BC set, the configuration will be updated.
    Please suggest which method is the best practice (M1 or M2).
    Thanks
    Senthil

    I am familiar with resource bundles, but wonder if there is a better approach within JDeveloper. Are there any plans to enhance this area in 9.0.3?
    Resource bundles are the Java-native way of handling locale-specific texts. For BC4J in 9.0.3, all control hints and custom validation messages (a new feature) are generated in resource bundles rather than XML files, to make it easier to "extend" for multiple locales.
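    For anyone who has not used them, a minimal sketch of plain Java resource bundles (the bundle name "Messages" and the key "greeting" are made up; Messages.properties and Messages_de.properties are assumed to be on the classpath):
    import java.util.Locale;
    import java.util.ResourceBundle;

    public class GreetingDemo {
        public static void main(String[] args) {
            // Loads Messages_de.properties for German, falling back to Messages.properties.
            ResourceBundle bundle = ResourceBundle.getBundle("Messages", Locale.GERMAN);
            // The "greeting" key must exist in the properties files, e.g. greeting=Hallo.
            System.out.println(bundle.getString("greeting"));
        }
    }
    The control hints mentioned above build on exactly this mechanism, one bundle per locale, which is why generating them as resource bundles makes multi-locale extension easier.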

  • "Best Practices" for using different Authentication Schemes ?

    Hi
    We are using different authentication schemes in different environments (Dev/QA/Prod). Changing the authentication scheme between the environments is currently a manual step during the installation. I am wondering if there are better "best practices" to follow, where the scheme is set programmatically as part of the build/load process for a specific environment, or any other ideas.
    We refrained from merging the authentication schemes (which is possible) for the following reasons:
    - the authentication code becomes unnecessary complex
    - some functions required in some environments are not available in all environments (LDAP integration through centrally predefined APIs), requiring dynamic execution
    Any suggestions / experience / recommendation to share are appreciated.
    Regards,
    - Thomas
    [On Apex 4.1.0]

    t-o-b wrote:
    Thanks Vikram ... I stumbled over this post; I was more interested in what the "workarounds" / "best practices" are, given these restrictions.
    So I take it that:
    * load & change; or
    * maintain multiple exports
    seem to be the only viable options
    ... in addition to the one referred to in my questions.
    Best,
    - Thomas
    Thomas,
    It's up to you really, and depends on many criteria (I think it's more a matter of the release process and version control).
    I haven't come across a similar scenario before, but I would maintain multiple exports so that the installation can be automated (no manual intervention required).
    Once the API is published (god knows when it will be) you can just maintain one export with an extra script to call the API.
    I guess you can do the same thing with the load & change approach but I would recommend avoiding manual intervention.
    Cheers,
    Vikram

  • Best practices for data representation

    I'm curious about the best data representation for a constant or variable when there is an obvious choice of two.
    For example, take the Timeout terminal of the Event structure. This terminal takes a Long (I32) data type, but I'm wiring to it a constant value of 100 and therefore could use an Unsigned Byte (U8). Setting the constant to be I32 prevents an automatic conversion step from happening, but setting it to be U8 saves a little bit of unnecessary allocated space.
    Which is better?

    Practically speaking, it more than likely will not matter until the data sets get large; however, as far as "best practices" go, it is best to keep the data consistent and in the type that the control, property node, etc. expects. Directly from the NI user manual (LV 7.1):
    "Coercion dots appear on block diagram nodes to alert you that you wired two different numeric data types together. The dot means that LabVIEW converted the value passed into the node to a different representation. Coercion dots can cause a VI to use more memory and increase its run time. Try to keep data types consistent in VIs."
    Cheers,
    --Russ

  • Best practice for Catalog Views? :|

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4000) and 300 end-users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is: how can I implement this?
    My first idea is:
    1. Create 110 procurement catalogs (one for every product category). Each catalog should contain only its product category.
    2. Assign, in my org model at the user level, all the "catalogs" that the user should access.
    Do you have any ideas for improving this?
    Saludos desde Mexico,
    Diego

    Hi,
    Your way of doing it will work, but you'll get maintenance issues (too many catalogs, and catalog links to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you'll have to assign the items to each view one by one (no dynamic or mass processes); it has been enhanced in CCM 2.0.
    My advice:
    - Challenge your customer about views, and try to limit the number of views, for example to strategic and non-strategic.
    - With CCM 1.0, stick to the procurement catalogs, or implement BAdIs to assign items to the views (I have tried it; it works, but it is quite difficult), but keep a limited number of views.
    Good luck.
    Vadim

  • Best Practice to implement row restriction level

    Hi guys,
    We need to implement a security row-filter scenario in our reporting system. Following several recommendations already posted in the forum, we have created a security table with the following columns:
    userName  Object Id
    U1             A
    U2             B
    and our fact table is something like this:
    Object Id    Fact A
    A                23
    B                4
    Additionally, we have created a row restriction on the universe based on the following WHERE clause:
    UserName = @Variable('BOUSER')
    If the report only contains objects based on the fact table, the restriction is never applied. This makes sense, as the docs specify that row restrictions are only applied if the table is actually invoked in the SQL statement (the SELECT statement, presumably).
    The question is: which is the best practice recommended in this situation? Create a dummy column in the security table, map it into the universe, and include that object in the query?
    Thanks
    Edited by: Alfons Gonzalez on Mar 8, 2012 5:33 PM

    Hi,
    This solution also seemed the most suitable for us. The problem we have discovered: when the restriction set is not applied for a given user (the advantage of using a restriction set being precisely that it is not always applied), the query joins the fact table with the security table without applying any where clause based on @variable('USER'). This is not a problem if the security table contains a 1:1 relationship between users and secured objects, but when (as in our case) the relationship is 1:n, the query returns additional, wrong rows.
    For the moment we have discarded the use of restriction sets. Putting a dummy column based on the security table in the query may have undesired effects when the condition is not applied.
    I don't know if anyone has found a way to work around this.
    Alfons

  • Best practice to integrate external (ERP or database, etc.) eCommerce data into CQ

    Hi Guys,
    I am referring to the Geometrixx-Outdoors project for building eCommerce functionality in our project.
    Currently we are integrating with an ERP system to fetch the product details.
    Now I need to store all the product data from the ERP system in our CRX repository under the etc/commerce/products/<myproject> folder structure.
    Do I need to create a CSV file structure as explained in the geometrixx-outdoors project and place it exactly the way they have mentioned in the documentation? By doing this, will the CSV importer import the data into CRX and create the sling:Folder and nt:unstructured nodes in CRX?
    Please guide me: what is the best practice for integrating external eCommerce data into the CQ system to build eCommerce projects?
    Are there any other best practices?
    Your help in this regard is really appreciated.
    Thanks

    Hi Kresten,
    Thanks for your reply.
    I went through the eCommerce framework link which you sent.
    Can you give me a few of the steps needed to use the eCommerce framework to pull all the product information into our CRX repository, and also explain how to synchronise the ERP system data with the CRX data? Is there a scheduling mechanism to pull the data from our ERP system and sync it with the CRX repository?
    Thanks
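    Not from this thread, but as a rough sketch of the kind of code involved: once product rows have been read from the ERP system, writing them into CRX with the standard JCR API could look roughly like this (the "myproject" folder name and the property names are hypothetical placeholders):
    import javax.jcr.Node;
    import javax.jcr.RepositoryException;
    import javax.jcr.Session;

    public class ProductWriter {

        // Writes one imported product as an nt:unstructured node under a sling:Folder.
        public void writeProduct(Session session, String sku, String title, String price)
                throws RepositoryException {
            Node products = session.getNode("/etc/commerce/products");
            Node folder = products.hasNode("myproject")
                    ? products.getNode("myproject")
                    : products.addNode("myproject", "sling:Folder");
            Node product = folder.hasNode(sku)
                    ? folder.getNode(sku)
                    : folder.addNode(sku, "nt:unstructured");
            product.setProperty("jcr:title", title);
            product.setProperty("price", price);
            session.save();
        }
    }
    For recurring synchronisation, such a writer would typically be invoked from a scheduled job that polls the ERP system at a fixed interval, but how that job is wired up depends on the CQ version and is not covered in this thread.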

  • Best Practice for Distributed TREX NFS vs cluster file systems

    Hi,
    We are planning to implement a distributed TREX, using RedHat on x64, but we are wondering what the best practice or approach is for configuring the "file server" used in the distributed TREX environment. The guides mention a file server, which seems to be another server connected to a SAN, exporting or sharing the file systems that have to be mounted on all the TREX systems (master, backup and slaves); but we know that the BI Accelerator uses OCFS2 (a cluster file system) to access the storage, and in the case of RedHat we have GFS or even OCFS.
    Basically we would like to know which is the best practice and how other companies are doing it for a distributed TREX environment, using either network file systems or cluster file systems.
    Thanks in advance,
    Zareh

    I would like to add one more thing: in my previous comment I assumed that it is possible to use a cluster file system with TREX because of the BI Accelerator, but maybe that is not supported; it does not seem to be clear in the TREX guides.
    That should be the initial question:
    Are cluster file system solutions supported on a plain TREX implementation?
    Thanks again,
    Zareh

  • Best practice in migrating to a production system

    Dear experts,
    Which is the best practice to follow during an implementation project for organizing the development, quality and production environments?
    In my case, considering that SRM is connected to the back-end development system, what should be done to connect SRM to the back-end quality environment:
    - connect the same SRM server to the back-end quality environment, even though in this case the old data remains in SRM, or
    - connect another SRM server to the back-end quality environment?
    thanks,

    Hello Gaia,
    If you have a 3-system landscape, the back-end connections should be like this:
    SRM DEV   - ERP DEV
    SRM QAS   - ERP QAS
    SRM PRD - ERP PRD
    If you have a 2-system landscape:
    SRM(client 100) - ERP DEV
    SRM(client 200) - ERP QAS
    SRM PRD         - ERP PRD
    Regards,
    Masa

Maybe you are looking for

  • Follow-up Activities in Recruitment...

    Hi Experts, I have a query regarding 'Recruitment-PB60'. When I try to do 'Follow-up Activities', it is not working & giving me an error- 'Activity type 007 not maintained (Choose another Entry) & likewise Activity Type 018 & 016 not ..............'

  • Trying to access methods from a .class file by creating instance of class

    Hey all, I'm hoping you can help. I've been given a file "Input.class" with methods such as readInt(), readString(), etc. I have tried creating instances of this class to make use of it, but I receive the error "cannot find symbol : class Input". If

  • If I turn on 3G on my iPhone, is it going to be free if i have unlimited data plan?

    I have unlimited web usage & data plan, so is 3G going to be free if I turn it on? I've been having my iPhone on "Edge" ( the e icon on top of my iphone instead of 3G)

  • IPad, Blogger, & Amazon Carousel Widget

    A widget shows up fine on an iMac in Mac OS, but does not appear on the iPad. Specifically, this is what is used: Safari Google Blogger Amazon Carousel Widget (displays books) Blogger in Safari on the iMac with Mac OS 10.7.3 displays the widget as ex

  • No backup found

    I erased my iPhone and when I sign in online I can see my old content like the notes and contacts but when I try to restore them from the iCloud, it does not show any back up. What do you think is the problem?