My paradigm shift to Qt

I first tried Mandrake in 2004, making my way through *buntu, Mint, Xfce, and LXDE before finally settling into Arch with Openbox and apps mostly from Xfce or its 'goodies' about 4 years ago. I was so happy with this setup that I only realized GNOME had blown itself asunder well after the fact. But recently the GNOME spazz-typhoon visited itself upon me when some of my favourite programs took up GTK 3 and went wonky. I had long been trying to keep everything on GTK, since running one light toolkit is supposed to be more efficient on my former pico-ITX and current near-death Eee PC. I have always begrudged needing Skype for my work.
GTK 3 calls the deal off. Since LXDE and Ubuntu Tablet are going Qt, so am I. GTK is no longer lighter than Qt, and to me it sure doesn't seem more mature in any sense. I have abandoned years of Linuxy sentiment invested in Midori, Liferea, AbiWord, Thunar and Parole, and plunged into the unknown world of a toolkit I had never previously given a second thought.
To me (a user who likes to set things up just the way he likes, but isn't committed enough to exhaustively remember commands and config parameters), and maybe to some others, removing GTK 2 and GTK 3 from my system without installing KDE has been an interesting experiment. First I've had to trade in my favourite programs for unfamiliar upstarts like QupZilla, qtfm and SMPlayer. Some other minor utilities have been switched out, like Sakura back to xterm, and some rarely used stuff lying around is gone, like GParted and GIMP.

Tint2 goes, unreplaced for now; I'm suspicious of the alternatives. I love Tint2 because, with the setup I learned from a brief stint in CrunchBang, no screen space is wasted and it looks nothing like Windows or OS X. ObConf and ObMenu are quick and convenient but not necessary. What will take the most getting used to is networking: I'm on a USB 3G adapter, so I need to get used to setting up my network the way you don't want anyone you'd like to convert to Linux to see. Right now that's the only major drawback.

Hopefully with the growth of LXDE-Qt and Ubuntu Tablet, more and more popular programs will get converted to Qt. Also, hopefully I'll soon figure out how to run qtconfig and get the Qt 4 and Qt 5 programs looking the same; TEA is hideous. Otherwise, so far so good.
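For the theming, this is the rough plan I'm working from (a sketch only; 'some-qt5-app' is a stand-in, the style names are just examples, and the Qt 4 tool may be packaged as qtconfig-qt4 on Arch):

    qtconfig                          # Qt 4: pick a style; settings land in ~/.config/Trolltech.conf
    some-qt5-app -style Fusion        # Qt 5: force a style for a single app
    export QT_STYLE_OVERRIDE=Fusion   # newer Qt 5: force a style for all Qt 5 apps

Qt 5 has no qtconfig of its own, so as far as I can tell, matching the two comes down to picking a pair of styles that look alike.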

Qt is a very mature and stable toolkit, and it is by no means slower or heavier than GTK (from my experience it is actually lighter). From a developer's point of view, Qt has great documentation and is based on C++, a really great language. Developing Qt apps, in my experience, involves much less headache than developing GTK apps.
I can tell you some apps I use that are Qt or pure X11, so you might find them useful in your shift to Qt.
Qt:
qtfm - a really great file manager. But I don't like the new 5.9 version based on Qt 5; it is too bloated for my taste. I think it was forked from the original qtfm, which was written by Wittfella, an Arch Linux Forums member. You can find the old Qt 4 version (qtfm 5.5) here.
smplayer - offers everything I ever wanted from a video player. The seek feature is great. The subtitle search feature is great. It offers VDPAU acceleration with NVIDIA cards. The alternative is VLC, also a great player.
qbittorrent - an awesome torrent client. It has the style of uTorrent from the old days, and has all the features I would ever need in a torrent client.
X11:
bmpanel2 - a great light panel. A bit hard to configure (because it requires bitmaps for themes), but once you configure it you're all set. Mine sort of looks like tint2.
loliclip - an awesome X11 clipboard manager. The only one I found that actually works without bugs and offers synchronizing PRIMARY and CLIPBOARD.
urxvt - no explanation needed.
compton - if you're into lightweight eye candy, this compositing manager actually works, offers vsync and doesn't slow down your system.
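As a starting point for compton, the invocation I'd suggest is something like this (a sketch; check compton --help for the vsync methods your build supports):

    compton -b --backend glx --vsync opengl

-b daemonizes it, and the GLX backend plus an OpenGL vsync method is the usual tear-free combination.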
P.S.
What keeps GTK2 on my system is an internet browser. Chromium still uses GTK2, and there is no decent Qt browser to replace it yet.

Similar Messages

  • Paradigm Shift: the WDP Model & the Power to Bind

    As a developer coming from an OO/Java background, I recently started to study and use the Java Web Dynpro framework for creating enterprise portal applications.
    Up to this point, I've developed 2 or 3 WDP projects - and in so doing, I've tried to reconcile my Java-influenced development methods with the SAP way of doing things. I'd say for the most part it was rather painless. I did, however, find what I consider a serious problem in the way SAP has promoted the use of the Java bean model importer.
    <a href="https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/u/251697223">David Beisert</a> created this tool and presented it to the SDN community in 2004 in his <a href="/people/david.beisert/blog/2004/10/26/webdynpro-importing-java-classes-as-model">weblog</a>. The same year (I don't know if it was before or after), SAP published '<a href="https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1f5f3366-0401-0010-d6b0-e85a49e93a5c">Using EJBs in Web Dynpro Applications</a>'. Both of these works presented simplified examples of invoking remote functions on EJB backends (an add() function in the case of David Beisert's example, and a calculateBonus() function in the case of the SAP publication). Accordingly, they both recommended the use of the Command Bean pattern as an implementation strategy for their respective examples - which I don't totally disagree with, in these particular circumstances. A simple execute() method is perfectly suitable if one needs to EXECUTE a remote function call - whether it be a calculate() method invoked on an EJB Session Bean or an RFC call made to some remote ABAP system.
    Problem is, not everything in life is a function call! To me, it makes very little sense to model everything as a command if it doesn't match your business model. The needs of your application should dictate the architecture of your model, and not the other way around.
    This unjustifiable fixation on the Command Bean pattern is probably to blame for the fact that very little seems to have been written, up to this point, about the binding mechanism - one of the most powerful tools in the arsenal of the Web Dynpro developer.
    What's this?
    Binding can make it possible to abstract away most of the nitty gritty context node navigation and manipulation logic and replace it with more intuitive and more developer-friendly model manipulation logic.
    There was a time when programs that needed persistence were peppered with database calls and resultset manipulation logic. Hardly anyone codes like that anymore... and with good reason. The abstraction power of object-oriented technologies has made it possible to devise human-friendly models that let developers concentrate on business logic, instead of wasting time on the low-level idiosyncrasies of database programming. Whether it be EJBs, JDO, Hibernate... whatever the flavour... most serious projects today use some sort of persistence framework and have little place for hand-coded database access logic.
    I feel that the WD javabean model offers the same kind of abstraction possibilities to the Web Dynpro developer. If you see to it that your WD Context and javabean model(s) mirror each other adequately, the power of binding will make it possible for you to implement most of your processing directly on the model - while behind the scenes, your context and UI Elements stay magically synchronized with your user's actions:
    +-------------+        +-------------------+         +--------------+        +------------+
    |    Model    |<-bound-| Component Context |<-mapped-| View Context |<-bound-| UI Element |
    +-------------+        +-------------------+         +--------------+        +------------+
                           o Context Root                o Context Root
                           |                             |
    ShoppingCartBean <---- +-o ShoppingCart Node <------ +-o ShoppingCart Node
    {                        |                             |
      Collection items <---- +-o CartItems Node <--------- +-o CartItems Node <-- ItemsTable
      {                        |                             |
        String code; <-------- +- Code <-------------------- +- Code <----------- CodeTextView
        String descrip; <----- +- Description <------------- +- Description <---- DescTextView
      }
    }
    Let's look at this concept in action. I propose a simple but illustrative example consisting of a shopping cart application that presents the user with a collection of catalog items, and a shopping cart in which catalog items may arbitrarily be added and/or removed.
    The Component and View contexts will be structured as follows:
       o Context Root
       |
       +--o ProductCatalog       (cardinality=1..1, singleton=true)
       |  |
       |  +--o CatalogItems      (cardinality=0..n, singleton=true)
       |     |
       |     +-- Code
       |     +-- Description
       |
       +--o ShoppingCart         (cardinality=1..1, singleton=true)
          |
          +--o ShoppingCartItems (cardinality=0..n, singleton=true)
             |
             +-- Code
             +-- Description
    Let's examine how a conventional Command Bean implementation of this component could be coded. Later on, I'll present a more object-oriented model-based approach. We can then compare the differences.
    public class ProductCatalogCommandBean
    {
       // collection of catalog items
       Collection items = new ArrayList();

       public void execute_getItems()
       {
          // initialize catalog items collection
          items = new ProductCatalogBusinessDelegate().getItems();
       }
    }
    This command bean will serve as a model to which the ProductCatalog node will be bound. This happens in the supply function for that node in the component controller:
    public void supplyProductCatalog(IProductCatalogNode node, ...)
    {
       // create model
       ProductCatalogCommandBean model = new ProductCatalogCommandBean();
       // load items collection
       model.execute_getItems();
       // bind node to model
       node.bind(model);
    }
    No supply function is needed for the ShoppingCart node, since it is empty in its initial state. Its contents will only change as the user adds items to or removes items from the cart. These operations are implemented by the following two event handlers in the view controller:
    public void onActionAddItemsToCart()
    {
       // loop through catalog items
       for (int i = 0; i < wdContext.nodeCatalogItems().size(); i++)
       {
          // current catalog item selected?
          if (wdContext.nodeCatalogItems().isMultiSelected(i))
          {
             // get the currently selected catalog item
             ICatalogItemsElement catalogItem =
                (ICatalogItemsElement) wdContext.nodeCatalogItems().getElementAt(i);
             // create new element for the ShoppingCartItems node
             IShoppingCartItemsElement cartItem = wdContext.createShoppingCartItemsElement();
             // initialize cart item with catalog item
             cartItem.setCode       (catalogItem.getCode());
             cartItem.setDescription(catalogItem.getDescription());
             // add item to shopping cart
             wdContext.nodeShoppingCartItems().addElement(cartItem);
          }
       }
    }
    public void onActionRemoveItemsFromCart()
    {
       // loop through cart items
       for (int i = 0; i < wdContext.nodeShoppingCartItems().size();)
       {
          // current shopping cart item selected?
          if (wdContext.nodeShoppingCartItems().isMultiSelected(i))
          {
             // get the currently selected item
             IShoppingCartItemsElement item =
                (IShoppingCartItemsElement) wdContext.nodeShoppingCartItems().getElementAt(i);
             // remove item from collection (don't advance i: the remaining elements shift down)
             wdContext.nodeShoppingCartItems().removeElement(item);
          }
          else
          {
             // process next element
             i++;
          }
       }
    }
    From what I understand, I believe this is the typical way SAP recommends using Command Beans as a model in order to implement this type of simple component.
    Let's see how the same two event handlers could be written with a more comprehensive object model at their disposal - one whose role is not limited to data access, but which is also capable of adequately presenting and manipulating the data it encapsulates. (The actual code for these model beans will follow.)
    // I like to declare shortcut aliases for convenience...
    private ProductCatalogBean catalog;
    private ShoppingCartBean   cart;

    // ...and initialize them in the wdDoInit() method
    public void wdDoInit(...)
    {
       if (firstTime)
       {
          catalog = wdContext.currentNodeProductCatalog().modelObject();
          cart    = wdContext.currentNodeShoppingCart  ().modelObject();
       }
    }
    Now the code for the event handlers:
    public void onActionAddItemsToCart()
    {
       // add selected catalog items to the shopping cart items collection
       cart.addItems(catalog.getSelectedItems());
    }

    public void onActionRemoveItemsFromCart()
    {
       // remove selected shopping cart items from their collection
       cart.removeItems(cart.getSelectedItems());
    }
    I feel these two one-line handlers are cleaner and easier to maintain than the two context-manipulation-ridden versions that accompany the command bean approach.
    Here's where the models are bound to their respective context nodes, in the Component Controller.
    public void supplyProductCatalogNode(IProductCatalogNode node, ...)
    {
       node.bind(new ProductCatalogBean(wdContext.getContext()));
    }

    public void supplyShoppingCartNode(IShoppingCartNode node, ...)
    {
       node.bind(new ShoppingCartBean(wdContext.getContext()));
    }
    Notice that a context is provided in the constructors of both models (a generic context of type IWDContext). We saw earlier that our model needs to be able to respond to such requests as catalog.getSelectedItems(). The user doesn't interact directly with the model, but with the Web Dynpro UI Elements. These in turn update the context... which is where our model will fetch the information it requires to do its job.
    Also note that a model is provided for the shopping cart here, even though it has no need to access or execute anything on the back-end. Again, the model here is not being used as a command bean, but rather as a classic object model. We simply take advantage of the power of binding to make ourselves a clean and simple little helper that will update for us all the relevant context structures behind the scenes when we tell it to.
    Here are the ShoppingCartBean and ProductCatalogBean classes (I've omitted a few getter/setter methods in order to reduce unnecessary clutter):
    public class ShoppingCartBean
    {
       Collection items = new ArrayList();
       IWDNode    itemsNode;

       public ShoppingCartBean(IWDContext context)
       {
          // initialize shortcut alias for the ShoppingCartItems node
          itemsNode = context.getRootNode()
                             .getChildNode("ShoppingCart", 0)
                             .getChildNode("ShoppingCartItems", 0);
       }

       public void addItems(Collection items)
       {
          this.items.addAll(items);
       }

       public void removeItems(Collection items)
       {
          this.items.removeAll(items);
       }

       public Collection getSelectedItems()
       {
          return ItemDTO.getSelectedItems(itemsNode);
       }
    }
    public class ProductCatalogBean
    {
       Collection items;
       IWDNode    itemsNode;

       public ProductCatalogBean(IWDContext context)
       {
          // fetch catalog content from the back-end
          items = new ProductCatalogBusinessDelegate().getItems();
          // initialize shortcut alias for the CatalogItems node
          itemsNode = context.getRootNode()
                             .getChildNode("ProductCatalog", 0)
                             .getChildNode("CatalogItems", 0);
       }

       public Collection getSelectedItems()
       {
          return ItemDTO.getSelectedItems(itemsNode);
       }
    }
    Notice that both classes delegate their getSelectedItems() implementation to a common version placed in the ItemDTO class, which seems like a good home for this type of generic ItemDTO-related utility.
    This DTO class could also have been used by the Command Bean version of the event handlers, and would somewhat reduce the number of loops. At any rate, the ItemDTO class shouldn't be viewed as "overhead" for the model-based version, since it usually will have been created in the J2EE layer for the marshalling of EJB data (see the <a href="http://java.sun.com/blueprints/corej2eepatterns/Patterns/TransferObject.html">Data Transfer Object pattern</a>). We just take advantage of what's there, and extend it to package some common ItemDTO-related code we require.
    // extends the DTO made available by the EJB layer
    public class ItemDTO extends com.mycompany.shoppingcart.dto.ItemDTO
    {
       String code;
       String description;

       public ItemDTO()
       {
       }

       public ItemDTO(String code, String description)
       {
          this.code = code;
          this.description = description;
       }

       // returns a collection of ItemDTOs for the currently selected node elements
       public static Collection getSelectedItems(IWDNode node)
       {
          // create collection to be returned
          Collection selectedItems = new ArrayList();
          // loop through item node elements
          for (int i = 0; i < node.size(); i++)
          {
             // current item element selected?
             if (node.isMultiSelected(i))
             {
                // fetch selected item
                IWDNodeElement item = node.getElementAt(i);
                // transform item node element into an ItemDTO
                ItemDTO itemDTO = new ItemDTO(item.getAttributeAsText("Code"),
                                              item.getAttributeAsText("Description"));
                // add selected item to the selectedItems collection
                selectedItems.add(itemDTO);
             }
          }
          return selectedItems;
       }
    }
    Notice that the getSelectedItems() method is the only place in our model where context node navigation and manipulation actually take place. It's unavoidable here, given that we need to query these structures in order to correctly react to user actions. But where possible, the business logic - like adding items to and removing items from the cart - has been implemented with standard Java constructs instead of by manipulating context nodes and attributes.
    To me, using a java bean model as an abstraction for the Context is much like using EJBs as abstractions of database tables and columns:
                         abstracts away
               EJB model --------------> database tables & columns
                         abstracts away
      WDP javabean model --------------> context  nodes  & attributes
    Except that a javabean model (residing in the same JVM) is much more lightweight and easier to code and maintain than an EJB...
    Before concluding, it might be worth pointing out that this alternative vision of the Web Dynpro Model in no way rules out implementing a Command Bean - if that happens to suit your business needs. You will of course always be able to implement an execute() method in your WDP Model if and when you feel the need to do so. Except that now, by breaking free of the mandatory Command Bean directive, you are allowed the freedom to ditch the execute() method if you don't need such a thing... and instead replace it with a few well-chosen operations like getItems(), addItems(), removeItems(), getSelectedItems()... which, as we've just seen, can add significant value to the javabean model made available to your WDP component.
    Comments would be appreciated on this issue (if anyone has had the time/courage/patience to read this far... ;). Am I alone here, intrigued by the potential of this (up until now) scarcely mentioned design strategy?
    Romeo Guastaferri

    Hi Romeo,
    thanks for sharing this with the community. I am a little bit surprised that the command pattern was understood as the only way to use the Javabean model in conjunction with EJBs. My command pattern blog was just a very simplified example of how a functional call can be translated into a Java bean model - really just to show how the paradigm of a model works. I personally use a similar approach to yours. It seldom makes sense to map an EJB method one-to-one to a model; the javabean model must be driven by the user interface, and represents a bridge between the business service layer and the UI.
    I personally even think that it often does not make sense to map RFC functions as they are to the Web Dynpro context. Most often you end up writing ZBAPIs that return structures shaped the way they are used in the UI. But if you use a Java bean model as a layer in between you and your service layer, you are more flexible in evolving the application.
    Anyway, design patterns for the Java bean model need to be discussed more on SDN, as they add very valuable possibilities you would never have when working with value nodes alone. With the Javabean model we are back in the real OO world, where things like inheritance work - things that are really not too well supported by the native WD features. I encapsulate every context of mine as javabeans. This has nothing to do with EJBs (which I am personally not a fan of), but only with the fact that I want to work with the power of the OO world.
    rgds
    David

  • Move clips from timeline to event sub folder?

    Aloha,
    Is there a way to move clips in the timeline that I have cleaned into a sub folder in an event library?
    I come from a non-FCP background and am still trying to wrap my mind/workflow around the FCP formula. Generally I clean clips and drop "some" into a "to be used later" folder.
    Example - While on location I might shoot some location shots while I'm waiting around. The location shots will be used in the last 3 minutes of a 25-minute finished project. When I get to post I edit and clean all clips in the order they are received. When I find the location clips I simply move them to a folder labeled "to be used later". When I get to the section of the project/timeline where the clips are needed, I drag them or the entire folder into the timeline.
    I see how to make a folder inside an event, but I am not able to drag clips from the timeline to this created folder. I have tried holding down Shift, CMD, OPT, and CTRL with no success.
    Thank you for your time and help.
    Frederick
    Marbelle

    Aloha Fox,
    Thanks for taking the time to answer my questions.  I feel you really understood my concern and gave me some good viable options.
    There is a lot for me to learn and unlearn. I have been editing for over 16 years, and most of that time I was using Edius. This is a big paradigm shift, coming to the Mac and FCPX.
    The idea for me is to edit the video and move the "to be used later" clips into a bin/folder instead of moving them to the location in the timeline where they will be used at that very moment. It's just a preference; I know plenty of folks would just move the clip to the location. Editing software is just a tool, and I have to decide if FCPX is the right tool for me.
    Thank you again Fox for your time and understanding of my question. 
    Be blessed,
    Frederick
    Marbelle

  • Automatic Filing With Rules - Long

    This is a long post, sorry, but I think the details might be worthwhile for someone who might be like me and wants to automate their mail processing as much as possible. I do have a question below about how I might better configure some rules, and about whether I might be able to streamline my workflow.
    I spent all day yesterday and most of the day today reorganizing my mailboxes, rules, smart mailboxes, etc., to clean up all my mail. I have messages dating back to 1999, and since I basically just color-coded everything with rules, rather than filing messages in mailboxes, my inbox and sent messages were starting to approach the 2GB limit. Some of the earlier messages came straight from Claris Emailer (talk about old school), and then through all versions of OS X Mail from 10.0 to the current 10.4.9. It's easy enough to move stuff from my inbox into a new mailbox, but not quite so easy for sent messages. It made searching more difficult, and it is still a manual process.
    Because I have so many messages (about 10,000 in total), I didn't want to have to manually go through everything to move them into separate organized mailboxes. So I created a whole new set of rules to make the filing as automatic as possible, and to keep it relatively automatic in the future. Of course rules can't catch everything, but doing as much as possible automatically, then letting me sort through what's left, is certainly preferable. Unfortunately, it's taken so long to figure out because of Mail's completely inadequate rules for filtering and smart mailboxes. I pored through this forum looking for solutions to the various roadblocks I hit along the way, and I thank all of you who are very knowledgeable and have posted helpful replies to other people asking questions. Below is the best solution I've been able to come up with, and I'm certainly open to any suggestions to make the rules less risky and more streamlined. I realize that AppleScript could probably make a lot of these things easier, or some of the shareware programs out there, but while I consider myself a very adept power user, I can't write scripts, and I prefer to keep add-ons to a minimum.
    The first change to my account settings and such was to enable the "Automatically CC myself" option. This is the only viable way I can find to process my sent messages without having to manually apply rules after the fact. I am aware of Andreas Amann's "Filter Sent Messages" AppleScript. I tried it out, but it was constantly choking on all my sent messages. I suppose it would work ok now that I have my sent mailboxes cleaned out, but I think I prefer the CC option because it assures that I have copies of all my sent messages at both my home and work computers. I also installed MailEnhancer to make sure that Mail's dock badge counts all unread messages in any mailbox, not just the Inbox. Now the fun stuff begins.
    First of all, I want to state that since my goal here is to stay away from the mailbox size limit (and also to speed up day-to-day use of Mail), smart mailboxes aren't really an option. I need my messages sorted to "real" mailboxes for their long-term storage. Also, simply dumping mail into yearly archives can be a big hassle too, because I have several projects/mailboxes/whatever that may not have a ton of messages and shouldn't really be split up arbitrarily by year. Thus, the main focus here is on rules. I have two primary e-mail accounts, home and work, that I check regularly from both locations. I have my home mail broken down into categories like bicycling notices, family, apartment landlord, photography side business, and streetcar/railroad website. I also set up mailboxes for miscellaneous stuff that doesn't fit into those categories. For work, I have several mailboxes for project specific correspondence, office-wide e-mails and announcements, office computers, our website, and again miscellaneous.
    Here's the other big paradigm shift. All these mailboxes contain both sent and received mail. This is especially valuable for the project work, because you can quickly look at the list of messages and follow the back-and-forth conversation. I originally started out having "sent" and "received" mailboxes for each project and other category that I've already listed, but when they're open it triples the size of the directory tree. Since there doesn't appear to be any way to make a mailbox/folder display all the messages contained inside its subfolders, I don't see much point in making a whole bunch of subfolders for that since I'd constantly have to open and close folders to get at anything. Also, with a more flat structure, I can quickly and easily see what unread messages are for what projects, and that also tells me if the rules are working well.
    So how do I differentiate sent versus received easily? The first rules I set up are to color the background of all messages that come into my home account. This would apply to all messages that come in to that account, whether it's one of my sent message CC's or not. The next rule catches the sent messages by looking to see if the account and the from address match. In that case it colors the message subject text, marks the message as read, and also flags it. Flagging is necessary to create smart mailboxes down the line that can differentiate between sent and received messages, since there's no facility for using the color of a message as a condition in a rule or smart mailbox. These two rules are repeated with different colors for my work account.
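    Spelled out in the same notation I'll use below, that sent-message rule looks roughly like this (the account name, color, and address placeholder are illustrative):
    If "all" of the following conditions are met:
    "Account" "Home"
    "From" "Contains" "[email protected]"
    Perform the following actions:
    "Set Color" of text "Blue"
    "Mark as Read"
    "Mark as Flagged"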
    Since none of the rules have moved any messages yet, further rules are free to process them. Basically I have one rule for each mailbox with various amounts of "any" criteria. For most situations it's pretty straightforward, but I have run into some difficulties with Mail's anemic rules. For example, for my family mailbox, I set a condition where "Sender is member of Group" "Family" that I set up in Address book. So far so good, but since I want this to grab sent messages to family as well, I need an "Any Recipient is member of Group" condition, which doesn't exist. The only way I can see to do this is to manually add all my family members' individual e-mail addresses as separate "any" conditions. That would make the rule look like this:
    If "any" of the following conditions are met:
    "Sender is member of Group" "Family"
    "Any Recipient" "Contains" "[email protected]"
    "Any Recipient" "Contains" "[email protected]"
    "Any Recipient" "Contains" "[email protected]"
    "Any Recipient" "Contains" "[email protected]"
    etc.
    Perform the following actions:
    "Move Message" to mailbox: "Family"
    Of course, what it SHOULD be is this:
    If "any" of the following conditions are met:
    "Sender is member of Group" "Family"
    "Any Recipient is member of Group" "Family"
    Perform the following actions:
    "Move Message" to mailbox: "Family"
    This one is tripping me up a bit, and I'm not sure how to use address book groups effectively for sent messages. This is also an issue for some of my work project e-mails, since conceivably I'd be sending messages to and receiving messages from the same group of people. For larger projects, my list could grow enough to where I can't add anymore conditions to the rule, which I know is a well-discussed bug (or lack of good coding anyway). Nevertheless, it seems that in order to capture a message with a specific e-mail address, I need to have two conditions for each address, one "from" and one "any recipient". That kinda stinks, and I'd like to find a better way to do it. Any ideas?
    Right now my last rules are to clean up any messages left over that the other rules didn't catch. It makes the color a bit darker and moves the messages to the appropriate "other" mailbox for that account. This ensures that my inbox is always empty. It also keeps my sent mailboxes cleaned out as well. Since I will get CC's of all my sent messages, I can just change my account preferences to delete sent messages after a day or a week.
    That's it for the heavy moving and lifting. With a few of my extra accounts that don't need any special filtering and moving around, I was able to accomplish this with about 30 rules. There are two rules per account at the top of the list to color and flag incoming and outgoing messages, and two rules per account at the bottom of the list to file any messages that don't meet the criteria for any of the other mailboxes. All the actual filing away into specific mailboxes happens in the rules in the middle of the list. What I really need to do though is figure out a way to simplify some of the conditions in those filing rules, like the family issue I mentioned above.
    I have set up a few smart mailboxes to display all my unread messages, recent received mail, recent sent mail, all recent mail, all received mail, all sent mail, and all mail. The "All Mail" mailbox's criteria are just that the message is not in one of my junk mail folders. This is my fail-safe, since even if something is mis-categorized or even unfiltered, I will still see it here. "All Sent Mail" collects all messages that are flagged. Flagging is the only way I can find to view just sent messages without having separate "sent" subfolders in all mailboxes. A benefit to doing it this way is that if I add or change any of my hard mailboxes, I don't have to edit the smart mailbox to keep it current. I can hide the flag column and it won't reappear by itself, so I don't have to see the flags. "All Received Mail" is the tricky one. As mentioned elsewhere on the forum, you can't set up a smart mailbox to get messages that are NOT in another smart mailbox. This mailbox needs to include all messages that are not in the "All Sent Mail" smart mailbox or any of my junk folders. I tried the trick at first where you pick "Message is in Mailbox" and then select your smart mailbox, then go back and change the criteria to "Message is not in Mailbox". When I'd close the rule, though, the smart mailbox I chose in the pull-down menu would change to a different one, and the logic would fail. I got it to work by trashing my smart mailboxes .plist file and just remaking them. That did the trick. The rest of my smart mailboxes just add a time limit to their list of criteria to keep the list shorter, and the unread smart mailbox is self-explanatory.
    So what these smart mailboxes do is they let me see anything from just my unread messages to every single message I've ever sent or received in one window. They're all colored accurately and sortable by sent versus received. If I keep my "Recent Received Mail" mailbox as the active one, it looks just like my inbox did before I changed all this stuff around. The only difference is that this particular mailbox automatically displays only messages from the last 3 months, and I don't have to manually clean it up every so often. This makes opening mail itself and navigating recent messages quicker. In the end, I can still see the messages the same way I always have been, but since they're all stored in different hard mailboxes I'll be able to avoid the 2GB limit by keeping everything well categorized. Nevertheless, the logic required to make this all work is quite cumbersome, and I worry about it not working correctly down the road as Mail is updated. I'd appreciate any thoughts you all have on making this work better, and about my family rule problem.
    Thanks all

    > As to marking sent messages versus received, if you didn't do that,
    > how would you be able to tell them apart easily?
    It’s the need to tell them apart that I don’t understand...
    > You'd want to be able to tell them apart for the same reasons
    > you'd want to tell anything apart.
    But I can already easily tell them apart without having to tag, colorize, or store them in separate mailboxes, so why waste an attribute of a message to store redundant information? It’d be better to reserve that attribute for storing information that cannot be made readily available by other means, e.g. subjective properties such as the importance of the message that you mention.
    One instance I can cite is if someone asks "did you send me such-and-such?"
    In that case, I can, for example, type the first letters of the name or address of that person in the search box, then click All Mailboxes and To in the banner that appears between the toolbar and the message list. Mail shows me what I’m looking for without my having to leave the mailbox I was looking at, or wonder in which specific mailbox those messages are actually stored. Having all those messages tagged or colorized the same way would be useless or even distracting/annoying.
    Unless you have a specific mailbox for storing only the correspondence with that person AND you can easily locate and select that mailbox amongst all the other mailboxes, I don’t see how it could be more convenient to look through a mailbox where messages can be told apart based on whether they’re sent or received, but not on who sent/received them...

  • Why can't I use iBooks Text Books across devices?

    iBooks 2 is incredible with its introduction of Text Books, as is the ability for anyone with a Mac to start developing texts via iBooks Author.  This has revolutionary potential written all over it. 
    BUT, just one thing Apple: 
    Why, in the last year, have you pushed so hard to integrate/sync devices via the Cloud, only to introduce a phenomenal tool like Text Books which can only be used on one device, the iPad, and which almost blatantly shows no intention to talk to other devices?
    I have invested in the whole Apple product line. I am invested in the ecosystem entirely. Why can I not start reading/note-taking from my Biology text in class on my iPad, continue skimming it from where I left off on my iPhone while going home on the train, and finally pull up the Text Book in iBooks on my MacBook (complete with my notes from the day), where I ultimately want to cut and paste my noted text to be used as quotes in a term paper I am composing in Pages (and what if Pages even talked to iBooks and auto-formatted a bibliography based on the text I have dragged and dropped)? As a student this scenario is not only very foreseeable, but necessary if I am to go over to a digital book format.
    Why is this not possible? Should I honestly believe it is the result of poor planning, or rather a strategy to boost iPad sales by making Text Books exclusive to the iPad?
    After all, in a parallel analogy to the above, Apple ecosystem users can currently start a film on their iPad, pick up where they left off on their iPhone, and then ultimately finish the film on their Mac or Apple TV. I can even start a normal book on my iPad and leave a bookmark, which I can later pick up on my iPhone. This is awesome and exemplifies the consideration and rigorous thought that I have grown to expect Apple to put into their products. The devices talk to each other and "it just works." But is Apple really allowing all of this, except when it comes to TEXTBOOKS, an educational tool?
    If the digital Text Books are really going to do so much to revolutionize education, then shouldn't Text Books be available across ALL devices, and to all people who ALREADY own iOS/OSX devices, in order to reach their maximum potential for influencing a change in education? If these tools are really so revolutionary and represent a paradigm shift, then they will speak for themselves, and schools will naturally get on board and dole out the money for the class sets of iPads.
    Please Apple, I switched over for a reason, mostly due to your philosophies and ability to make things that "just work". Don't sell out here. Not allowing Text Books to be accessed across devices does not have the markings of a "paradigm shift in the dark ages of education", but instead reeks of a slick marketing scam by the most successful company in the world to pump tax money out of education budgets into the overflowing pockets of Apple. Shame on you. That is not giving to education, that is taking.
    Make good and develop TextBooks to work across ALL iOS/OSX devices, please. 
    Respectfully,
    A teacher/student

    My understanding is that you can use the Cloud on any device you want, but only one at a time. Just as you can install the Creative Collections on as many computers as you wish, but it can only be activated on two at a time - which is not really a limitation, since you can only use one of them at a time anyway.
    I think the wording about this should change.
    For instance, I travel and may or may not wish to lug a laptop with me. It is my understanding that when traveling I can log into the CC, download my apps, and use them on any computer capable of running them, and work away. But since I am logged in at a different location on a different device, it will only work on that device.
    And this is the way it should work, because say you travel and your laptop is lost or stolen: you should still be able to use your apps, and also stop them from working on the lost computer by logging in somewhere else.
    This is the way I understand the advantages of the Cloud. It is designed to be flexible, not inhibiting.
    Now, on the other hand, if I had an assistant whom I wanted to use these apps, then it makes sense to purchase a license for them as well.
    It is one license for one user, as it should be, but that user can access it wherever they need it.
    That is my understanding.

  • Getting the classic "Macintosh Basics" tutorials to run in OSX (and Linux?)

    Good afternoon folks,
    I remember WAY back in the early days that there were really good basic tutorial programs that came with your new Macintosh to teach you the basics. If my memory is correct, I THINK my favourite was the earliest one, "Mouse Basics". All I remember is that, as a kid, the first tutorial I had on a Mac was this silly little fun game to teach how to use a mouse, how to click, how to drag, etc. I vaguely remember a fishbowl was involved somehow, and there was another part where you dragged a piece of paper from a desk into a garbage can.
    Heck, I think there may have been one for the pre-Mac Apples that was also really good.
    Today, I have two uses for these programs, and I want to figure out how to get them to work on modern hardware.
    1) My dad, believing the hype about OSX being the "easiest computer in the world to use", went and bought himself a really expensive iMac, thinking it would do EVERYTHING for him. Remember the scene from Star Trek IV where Scotty sits in front of the Mac Plus and says, "computer," into the mouse? That's pretty much the mindspace my dad's coming from. Plus, my mom's even worse, and is having a really hard time grasping the concept of using a mouse. They want me to get Skype working on their computer, but when I tried to help them out I discovered just how much they need to learn before they get CLOSE to the point where they can use Skype comfortably.
    But, because all this stuff is so automatic for me, because I've been using computers since around 1986 when I got my first C=64, I cannot figure out how to verbalize these concepts into words in a way they can understand. Trying to explain to them the concept of the "desktop" as a metaphor is really hard. To them, a computer is a machine that does something FOR you, like a toaster. To me a computer is a virtual "space" that I "enter" in order to do things for myself.
    For example, they might ask something like, "how do I get the computer to do x?" And I would answer, "this is how YOU do x." Or, to put it another way, they might unconsciously think to themselves, "I want the computer to give me the information I ask for," while I would unconsciously think, "I want to go into the file system and find the information that's stored there." That's why it's called the "Finder" after all, right? It's a philosophical paradigm shift they just can't seem to make. So, they don't want me to teach them how to use Skype. They want me to teach Skype how to work for them! Somehow I cannot explain to them that an iMac is not a HAL 9000...
    So, I want to use these very old basic tutorial programs on OSX as a way to get my parents some practice on the very very very basic skills needed to use a computer. How to click. How to drag. The concept of the "desktop". Etc.
    Anybody have any simple tricks I can try? I suppose I could download QEMU for OSX and then install System 7 on their iMac, but it seems to me that there MUST be a simpler solution.
    2) I'm trying to develop a remastered Damn Small Linux livecd for my very young nieces and nephews to help introduce them to computers. The idea is to prevent my silly siblings from wasting their money on those stupid pink plastic "laptops" you can buy at Toys R Us. Instead, they'd simply take their old laptop that they don't use anymore and just boot it up with the DSL livecd. On the cd will be all sorts of age-appropriate games, educational software, and a kid-oriented internet browser (I'm trying to get zacbrowser to run under WINE, so far without any luck.)
    There would be two users built-in to the livecd. If you boot it without using a password you get "kid mode" with a really friendly desktop with large cartoony icons with all the programs for the kid. If you boot it with the preconfigured password you'd get "parent mode" allowing access to preferences, utilities, and the myDSL package installation system.
    Along with giving new life to my brothers-in-law's old laptops, they could also take the livecd with them when they're visiting other folks. Instead of dumping the kids in front of the tv, they can just pop the livecd in the family's computer and the kids can plug away without the ability to touch anything stored on the harddrive.
    Think of it as SugarOS for really old hardware.
    As you've probably guessed by now, I want to include these old Mac tutorials with the CD, and again I'm trying to figure out the best way to get them to run under Linux. Again, I could try running System 7 under QEMU, but I'd really like to find a better way. Is there such a thing as "WINE for classic macs"?
    In the unlikely event that anybody has a way to modify these old programs to run natively under OSX (or Linux), that would be AWESOME!!!
    Or if anybody knows of any modern substitutes to the classic Mac Basics tutorials, that would also be cool (but not nearly as cool as a way to run the original programs. I really really liked the way those suckers did the job.)
    Thanks in advance!

    Hi, and a warm welcome to the forums!
    Couple of thoughts...
    You could get/use SheepShaver to actually run Classic Apps on Intel Macs.
    For "Kid Mode" you could put other's Home folders on USB Sticks big enough.

  • Is it possible to remove the 'soap12' namespace from the WSDL for basicHttpBinding endpoints (i.e soap 1.1 clients)?

    Hello,
    We want to import the WSDL from a running WCF service endpoint (which is using basic HTTP binding) in order to implement the same web service interface in a Lotus Notes application. Unfortunately the proxy generator for Lotus Notes doesn't like the particular dialect of WSDL that our WCF service is exposing, in particular it throws a wobbly when it sees
    xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/"
    Even though SOAP 1.2 isn't being used, the basic HTTP binding leaves this reference in the WSDL, and Lotus Notes seems to think this is a SOAP 1.2 WSDL file, even though I know it's a SOAP 1.1 compatible file.
    In order to get around this I'm using disco.exe to pull the WSDL files down locally, manually editing the files to remove this unused namespace, and then editing all the schema import locations to convert lines such as http://localhost:7780/MEX?xsd=xsd0 to lines such as file://\wsdl\MEX.xsd etc...
    Is there a simple setting somewhere that will remove that 'soap12' fragment from the WSDL? It's a bit of a pain because we are in a development phase, so the WSDL isn't completely stable at the moment and changes frequently, which forces this manual editing stage each time.
    Thanks
    Paul.

    The following method works correctly for me:
    public class Soap11WsdlExporter : WsdlExporter
    {
        public override MetadataSet GetGeneratedMetadata()
        {
            var metadataSet = base.GetGeneratedMetadata();
            foreach (var metadataSection in metadataSet.MetadataSections)
            {
                var description = metadataSection.Metadata as System.Web.Services.Description.ServiceDescription;
                if (description != null)
                {
                    var namespaces = description.Namespaces.ToArray();
                    var newNamespaces = new XmlSerializerNamespaces();
                    foreach (var ns in namespaces)
                    {
                        // copy every namespace declaration except the soap12 one
                        if (ns.Name.ToLower() != "soap12")
                            newNamespaces.Add(ns.Name, ns.Namespace);
                    }
                    description.Namespaces = newNamespaces;
                }
            }
            return metadataSet;
        }
    }

    public class MyServiceHostFactory : ServiceHostFactory
    {
        protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            ServiceHost serviceHost = new ServiceHost(serviceType, baseAddresses);
            var useSoap11 = true;
            // only remove the soap12 namespace if all the endpoints use SOAP 1.1
            foreach (var endPoint in serviceHost.Description.Endpoints)
                useSoap11 &= endPoint.Binding.MessageVersion.Envelope == EnvelopeVersion.Soap11;
            if (useSoap11)
            {
                var behavior = serviceHost.Description.Behaviors.Find<ServiceMetadataBehavior>();
                if (behavior != null)
                    behavior.MetadataExporter = new Soap11WsdlExporter();
            }
            return serviceHost;
        }
    }
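    If the service is hosted in IIS, the factory still needs to be wired up; a sketch of the .svc directive (the type names are placeholders for your own):
    <%@ ServiceHost Language="C#" Service="MyNamespace.MyService"
        Factory="MyNamespace.MyServiceHostFactory" %>
    For self-hosting, you could instead apply the same MetadataExporter swap directly to the ServiceMetadataBehavior of the ServiceHost you construct.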
    Hope this helps
    Another Paradigm Shift
    http://shevaspace.blogspot.com

  • How to loop a script... (novice in over head?!)

    At the core of it, I want to be able to change the file link from a .tif extension to an .ai extension if the appropriate file exists. I can do this through a contrived search and replace on the IDML files, but it scares the people who work on the files... so I'm left coming up with an InDesign script which mimics this sort of workflow. One of the InDesign scripting references I purchased, 'InDesign CS5 Automation Using XML and JavaScript' by Grant Gamble, has a great script for accomplishing this. However, the script is written with a single file input in mind, and I'd like to extend it to handle a list of files defined by the user.
    I've tried with my sub-novice skills to loop it, and I end up with a bunch of null-object garble. I've managed to modify it to the point where it will open up the next document in the 'files' list, but InDesign doesn't seem to recognize its existence, for lack of a better term. At the end of the day I might be in over my head; I've struggled with this for too long, so at this point I'm hoping someone can help me along and I can backtrack how their solution worked! I feel that this type of solution would help to evolve my understanding of functions and variable scope.
    I'm likely in need of a paradigm shift! In a perfect world I would like to be able to:
    var myFolder = Folder.selectDialog("Choose a folder");
    var files = myFolder.getFiles();
    for (var i = 0; i < files.length; i++) {
        app.open(files[i]);
        Run("other script");
        app.activeDocument.close(SaveOptions.YES);
    }
    Here's the "other script":
    // Example
    var g = {};
    main();
    g = null;

    function main(){
        if(app.documents.length == 0){
            alert("Please open the document containing the images to be relinked.");
        }
        else{
            intResult = buildDialog();
            if(intResult == 1){
                relinkGraphics();
            }
        }
    }

    function buildDialog(){
        g.doc = app.activeDocument;
        g.win = new Window("dialog", "Re-link Images");
        g.win.add("statictext", undefined, "Choose file extension or enter text:");
        var arrExtensions = [".ai", ".bmp", ".eps", ".gif", ".jpg", ".png", ".psd", ".tif"];
        // From panel
        g.win.pnlFrom = g.win.add("panel", undefined, "Find:");
        g.win.pnlFrom.ddl = g.win.pnlFrom.add("dropdownlist", undefined, arrExtensions);
        g.win.pnlFrom.txt = g.win.pnlFrom.add("edittext");
        g.win.pnlFrom.txt.minimumSize = [200, 20];
        g.win.pnlFrom.ddl.onChange = function(){
            if(this.selection != null){g.win.pnlFrom.txt.text = this.selection.text;}
        };
        // To panel
        g.win.pnlTo = g.win.add("panel", undefined, "Change to:");
        g.win.pnlTo.ddl = g.win.pnlTo.add("dropdownlist", undefined, arrExtensions);
        g.win.pnlTo.txt = g.win.pnlTo.add("edittext");
        g.win.pnlTo.txt.minimumSize = [200, 20];
        g.win.pnlTo.ddl.onChange = function(){
            if(this.selection != null){g.win.pnlTo.txt.text = this.selection.text;}
        };
        // Add button
        g.win.btnAdd = g.win.add("button", undefined, "Add");
        g.win.btnAdd.onClick = updateListBox;
        // Criteria list box
        g.win.lstChanges = g.win.add("listbox", undefined, undefined,
            {numberOfColumns: 2,
             showHeaders: true,
             columnTitles: ['Find:', 'Change to:'],
             columnWidths: [100, 100]
            });
        g.win.lstChanges.minimumSize = [200, 100];
        // Delete button
        g.win.btnDelete = g.win.add("button", undefined, "Delete Selected");
        g.win.btnDelete.onClick = deleteListItem;
        // Manual/Auto radio buttons
        g.win.radManual = g.win.add("radiobutton", undefined, "Manual");
        g.win.radAuto = g.win.add("radiobutton", undefined, "Automatic");
        g.win.radAuto.value = true;
        // Relink and Cancel buttons
        g.win.btnRelink = g.win.add("button", undefined, "Relink");
        g.win.btnRelink.onClick = function(){
            if(g.win.lstChanges.items.length == 0){
                alert("Please specify the relink criteria.");
            }
            else{
                g.win.close(1);
            }
        };
        g.win.btnCancel = g.win.add("button", undefined, "Cancel");
        return g.win.show();
    } // End function buildDialog

    function updateListBox(){
        if(g.win.pnlFrom.txt.text == "" || g.win.pnlTo.txt.text == ""){
            alert("Please enter text in both boxes.");
        }
        else{
            var itm = g.win.lstChanges.add("item", g.win.pnlFrom.txt.text);
            itm.subItems[0].text = g.win.pnlTo.txt.text;
            g.win.pnlFrom.txt.text = "";
            g.win.pnlTo.txt.text = "";
            g.win.pnlFrom.ddl.selection = null;
            g.win.pnlTo.ddl.selection = null;
        }
    }

    function deleteListItem(){
        if(g.win.lstChanges.selection == null){
            alert("Please select the item to be deleted");
        }
        else{
            g.win.lstChanges.remove(g.win.lstChanges.selection);
        }
    }

    function relinkGraphics(){
        var strFeedback = "";
        var blnRelink;
        for(i = 0; i < g.doc.allGraphics.length; i++){
            // Exclude embedded items
            if(g.doc.allGraphics[i].itemLink == null){
                continue;
            }
            // Apply all changes specified by user to filepath of graphic
            strBefore = g.doc.allGraphics[i].itemLink.filePath.toLowerCase();
            currentLink = g.doc.allGraphics[i].itemLink.filePath.toLowerCase();
            for(j = 0; j < g.win.lstChanges.items.length; j++){
                currentFind = g.win.lstChanges.items[j].text.toLowerCase();
                currentChange = g.win.lstChanges.items[j].subItems[0].text.toLowerCase();
                currentLink = currentLink.replace(currentFind, currentChange);
            }
            // Relink image if applicable
            blnRelink = false;
            if(strBefore != currentLink){
                blnRelink = true;
                if(g.win.radManual.value == true){
                    app.select(g.doc.allGraphics[i].parent);
                    blnRelink = confirm("Relink this image?");
                }
            }
            if(blnRelink == true){
                try{
                    g.doc.allGraphics[i].itemLink.relink(File(currentLink));
                    strFeedback += "Relinked: " + strBefore + " ->\nto: " + currentLink + "\n\n";
                }
                catch(err){
                    strFeedback += "Could not relink: " + strBefore + "\n-> to: " + currentLink + "\n\n";
                }
            }
        }
        // Display user feedback
        if(strFeedback == ""){
            alert("No images were relinked.");
        }
        else{
            var fleFeedback = File(app.activeScript.parent + "/relink-log.txt");
            try{
                fleFeedback.open("w");
                fleFeedback.write(strFeedback);
                fleFeedback.close();
                fleFeedback.execute();
            }
            catch(err){
                alert("Sorry. Due to an error, the log file could not be created.");
            }
        }
    }
    Any and all help is greatly appreciated,

    The full script runs on "the active document", and it's possible that that is the point where ID gets confused. Can you re-write it as a function with *one* parameter -- the document to work on?
    Then you can do your loopy (sorry) looping like this:
    someDoc = app.open(files[i]);
    Run("other script", someDoc);
    someDoc.close(SaveOptions.YES);
    "app.open" returns a handle to the ID document once it's opened, and if you save it into a variable you can use this wherever you are using "app.activeDocument" now.

  • I would like to add my Deskjet F4480 printer to my home wireless network. How can this be done?

    I have a Linksys wireless router that is the hub of my home wireless network.  I have been successful in adding an HP OfficeJet 6500 All-in-One printer and an old HP DeskJet 890C that I purchased in 1997 with a computer running Windows 95.  The HP 890C has been my tireless workhorse throughout my two kids' K-12 schooling, as well as printing reams of paper during my MBA... and guess what: it still works.  However, my daughter has just returned home from college and she has an HP DeskJet F4480 printer/scanner that she would like to add to the home network.  I have been researching the problem, but haven't been successful in finding out whether this printer can be added to a wireless network.  Or simply, will an 802.11g Wireless USB 2.0 adapter suffice?  I am also adding a new Windows 7 laptop to the network, which will need to print to this F4480.  I patiently await the forum's response.

    My mistake, it seems that price was a deciding factor when I purchased the F4480 in lieu of a more expensive wireless-ready printer-scanner.  I will eventually come up with a solution, as I did with the 890C, where I have a wireless print server connected directly to the printer's port.  It seems that backward compatibility is still evident, but I am having some problems finding the long-lasting HP49 and HP23 tri-color ink packs.
    Here is the rub: I don't replace technology just because it is outdated. I upgrade and/or repair everything myself. I found warranties not worth the paper they're written on, and in most cases the technical support was lacking.  My Win95 desktop went through several major components until I found that it was becoming harder to find software to run on it.  I finally replaced it in 2007.  Now it seems that there has been a paradigm shift toward utilities that are very hungry for memory, and even if I didn't want to, I am forced to come over to the dark side with a device that has a minimum of 8GB of RAM.

  • Help getting worse and worse

    When CS5 came out, I spent a lot of time here trying to understand what had happened to the online help.
    I even stopped adding comments and being part of the community help, because metric-driven design is,
    as far as the help layout is concerned, a very, very bad idea.
    My hopes were limited for CS5.5 and I really wanted Adobe to correct this in CS6...
    I wonder what happened with the online help part of Adobe apps in general, and AE specifically.
    I wanted to check some stuff on it, and it's a complete disaster (at least in French).
    All the links in the "what's new in CS6" page point to Todd's blog home page, not even to the related post, and it even links there when the topic is not discussed on the blog at all. Luckily, they all come with another link, to the home page of video brain, not even to the correct course or anything.
    I'll be more specific: for a tutorial I'm recording, I wanted to check some info on the new material properties, because I'm not a 3D guy and wanted to use the correct French words and be sure not to mess up.
    So I went to the online help home page, got to the "what's new in CS6" page, then scrolled down to the new material options, where it linked me to Todd's blog and video brain. Both to the home page. Because I didn't care about watching a video, and needed to read some stuff in French to get the correct words, I went to the Layer & properties part of the help.
    What a joy: 6 of the 7 topics hadn't been updated since January 1st (meaning no info about new stuff in CS6), and the only part updated is about 3D layers, yay! So let's open that page.
    I got 8 topics, none with info about the materials, old or new, only basic info about rotating, moving, 2D/3D layers, intersecting... and, on this "Product affected: Adobe After Effects CS6" page, info about Photoshop 3D layers, a feature removed from the product.
    That's when I used that handy "search box", because that's what I was told when I complained about how lame the CS5 docs navigation was compared to the CS4 one: the paradigm shifted from topic-related discovery to keyword-search-related discovery.
    Let’s try to find out some info for the "Options Surface" category.
    It takes me to the community help results: http://community.adobe.com/help/search.html?searchterm=optionsurface&q=optionsurface&lbl=aftereffects_product_adobelr&x=0&y=0&area=0&lr=en_US&hl=en_US
    The first AE-related link is 7th on the list, despite a search in AE-related docs... And of the 4 AE-related links, nothing to do with what I need. Yes, I know, I searched for the French name, but shouldn't I get community results in French, or at least official documentation results in French?
    If I select the "only adobe content" radio button, it's worse. Fewer AE links, less relevant.
    If I click "support" instead of Community help, it's no better; still more links pointing to Illustrator or Photoshop than After Effects. At least 1 link is related to 3D layers in a raytraced environment, but it's not what I'm looking for.
    Oh yes, now there's a topic-browsing page. Let's have a try with that. Guess what? I couldn't find the info I was looking for.
    Removing the breadcrumb navigation in CS5 help was an ugly move, but this new help system is just... I don't know how to say it politely. I haven't been satisfied with the help system since CS5, when you removed the very useful, always-in-context and organised breadcrumb navigation on the left-hand side of the screen. I wish you would stop the "search" paradigm and go back to the "organised" one. I know your metrics show people love searching. But how many searches do they have to go through before getting the correct result? How many unsuccessful searches before they don't bother anymore? Honestly, I've never seen such a messy help, and it really is confusing. It even feels like Adobe doesn't bother about it anymore, and that it's now relying only on external content, giving the feeling that the only things up to date are the links to videobrain/creativecow/whatever, rather than the official content itself. Luckily, the curators for these links only point to very good tutorials.
    The other thing that is bothering me is that we get more and more video content instead of text content. Don't get me wrong, I love watching videos. But I also love to read the docs. It's much more effective than trying to seek through a video for that special piece of info you are looking for (and there's no way to know if the info you're looking for is indeed in the video or not, or to "search" its content).
    And don't get me started on the Help App; I never understood why you did it in the first place. Not as inconvenient as the website, but nearly.
    Not having efficient documentation is bad. I've been asking for a help redesign since CS5, and have complained in a blog post and on the community help forums. With this new nonsensical online help, where nothing is to be found easily, this will soon become mandatory. If Adobe doesn't care about help anymore, they should remove it completely instead of making it less and less relevant, and more and more messy. If you have an email address where I can send this rant, I'd be glad to let the right people know.
    Seb

    Thx for the reply Terri,
    I'll probably send you a mail with more thoughts, but what I feel from your answer is that you didn't get the major issue I have with the online doc since CS5.
    Sure, it's nice to know that you'll fix your broken search engine and that you'll add some filters back (reference only). But the major issue here is your paradigm shift from indexed content to search-only content. Nothing can be found easily. Absolutely nothing. Either in the localised versions or in the English version. The layout is nerve-racking, and the search results are really bad (but this is what you are going to fix someday soon).
    For an online help to be efficient, we need properly organised and indexed content first, and then a powerful search engine to find it if we don't know where to look. Right now, you've wiped out the index & organisation and just piled stuff here and there, leaving us the only way you think we want to browse the help: search. And that's the problem. We are not able to browse the help anymore; we can only search it, and we cannot even rely on that.
    Sincerely, go browse the CS4 online help. It was clear, and it worked like a charm with the always handy, always displayed breadcrumb navigation. It also featured search. Now go back to CS6 and tell me where the improvement is. No way to properly separate the official docs from Adobe official blog posts, from 3rd-party sites/blogs. Everything is mixed up and unbrowsable. Even some of the "topic pages" make no sense, as I demonstrated in my previous post. No way to keep context. No way to explore more without going back to the search results. Having more great content from 3rd parties is really interesting, but its poor integration with the official docs makes it an experience breaker.
    When I'm in my app and need help, I first and foremost need to go to the official help, clearly organised and with an understandable, clean layout. And when I'm on my topic and feel I need more info / in-depth knowledge, a link to 3rd-party tutorials, free or paid. That's the experience I need, and it's not the experience I'm getting. I don't understand how it became so messy, but honestly, I've never seen anything like it.
    Also, you've increased the focus on video content. Yeah, watching tutorials is cool. But video is not searchable. Video is not immediate. When I need help, I need to read the answer first to fix my issue, and then, if I feel I need to learn more, go to AdobeTV, Videocopilot, Videobrain & such to learn more stuff. Your emphasis on 3rd-party and video content has shrunk the volume of quickly available info. We need more text content, and more text content within the official documentation, not from partners. Because once again, each time I'm redirected to a partner site, I lose context...
    So please, stop focusing on search only, and bring back the much stronger breadcrumb navigation; bring the official documentation back to the front, so the info is precise, findable, and not diluted.

  • Letter of Appreciation to Devs, and two requests

    Now that the nouveau and NVidia junk has been sorted out, I'm back to Arch. (I use it for my consulting business, when the graphics work, that is.)
    I want to say thank you to the developers for all their hard work.  They get yelled at all too often, and thanked way too little.
    I'm an older guy (Vietnam-era vet), and I appreciate this distro more than the others because of its rolling-release nature and because it gives control to us, the users, over what is or is not included in our individual systems. As the date to the left of this post indicates, I am no noob to Arch.
    The only times I've had to use something else were when the graphics subsystem was broken. I had to work, and the CLI did not allow me to do that work. I freelance to make up the difference in income that VA disability doesn't cover.
    I know about the finger-pointing at NVidia and Xorg, and choose not to assign blame. I just don't have enough technical info, and I really hate pissing contests anyway, so I choose not to get involved.
    The Nouveau drivers, in my experience, have been unusable in a production environment (especially for CAD). I have an NVidia GTX-560Ti card, and the Nouveau site doesn't support it well: the 400s and 600s are covered, but the 500s seem to have been passed over for what I will assume are legitimate reasons within their dev community.
    You see, I use a commercial CAD package I purchased for Linux. I need the graphical subsystem to not break, or I have a work stoppage. Yes, I now dual-boot with Winsucks with the same CAD package installed therein, so work can continue in case of breakage... now... (at one time Arch was the only OS installed, until an NVidia driver update broke the entire system and I missed work deadlines). I would welcome the day when I can return to the paradigm of only Arch being installed. Windows 8 is a disaster (in the preview edition I VM'd); I will refuse to use it, as I do Gnome 3.*. (I need a traditional desktop where I can work, with multiple workspaces, and not have to play with a smartphone. I acknowledge that some users prefer this. So be it. I just don't want this paradigm shift shoved down the throats of traditional users.) I prefer MATE over XFCE for its flexibility: more choices for me, the user, without having to do a lot of things by hand (hint: gtkrc editing). For now, I use XFCE.
    Request #1 is this:
    As updates go, I humbly request that a little more time and QA happen at the graphics level, so that a "pacman -Syu" doesn't break the GUI because of some issue with NVidia, Xorg, or whomever. And in saying QA, I do not implicate the package maintainer, but am suspicious of NVidia releasing an update that screws the pooch, so QA means a little more testing of NVidia's/Xorg's latest updates before they are released to the wild.
    I would welcome the day when Nouveau is ready for primetime, and I am saddened that NVidia is not cooperating with them. Considering the Nouveau devs have had to reverse-engineer everything, I'd say they've done a stellar job... but they are not there yet for those of us in a production environment, regardless of which distro one chooses.
    I use Arch also because of its currency, and therein lies the catch-22: currency vs. stability. I also use Arch so I do not have to reinstall every 3-6 months to get current packages, wasting all the time it takes to install a new release and get it set up the way one likes. I use Arch also because it gives me control over my system.
    I'm just asking for a tad more attention to detail, if possible, to the graphical subsystem when updates occur. If I had the cash, I'd buy a "butcher machine" for the sole purpose of being a testing resource for Arch, but seeing that I am disabled, cash is tight, so I can't help in a meaningful way. And I am NOT blaming the NVidia maintainer; NVidia is just as guilty for not keeping current with Xorg developments, etc. I just ask that if an NVidia update breaks things in testing, warn us on the main page as you do for init updates and the like; then don't release the problematic update to the wild, or if possible, give us workarounds to implement on our systems to keep us online.
    Request #2 is:
    Please bring the MATE desktop mainline. I've used it in Mint before returning to Arch, and with the GTK 3 libs it performs well, is compatible with newer GTK3-based apps, yet maintains the traditional desktop for those of us who are business oriented. I left Mint because I couldn't get the most current libs needed to run the latest updates of my CAD package, at least not without butchering the system and causing a lot of downstream headaches for myself. Ubuntu, Mint's parent, is getting pretty weird in how it does things compared to a mainstream distro. Compiling many things manually and adding them to your system causes more downstream issues than the ones you fix for your immediate problem.
    I know that the latest packages may break something... that's part of the risk of currency... I am just asking that new packages don't take the system down to its knees. My definition of "system" = base system plus a functional GUI in Xorg.
    Yes, I review the main Arch news before I update, so I am not a "stupid" user of the distro who ignores warnings, like some I read about in the forums.
    I realize that others may have had different experiences than me, and I by no means want to minimize those experiences in this post.
    Again, I appreciate Arch...And I thank all the devs for their volunteer time and effort. Sincerely so.
    Respectfully,
    Dave
    PS: The new install method: Not that hard. I used my netbook so I could read the wiki, while I was reloading my main workstation from scratch to check it out. And in about the same time, I had a new Arch installed. A few rough edges, but all easily fixed. Once the wiki gets fleshed out, I suspect all the rough edges will vanish into thin air.

    dcbdbis wrote: I humbly request that a little more time and QA happen at the graphics level, so that a "pacman -Syu" doesn't break the GUI because of some issue with NVidia, Xorg, or whomever.
    You can do a "test update" without spending any money: Create a 10 GB partition and install Arch Linux on it with the same packages you have on your "real" installation. Just wait to do an update until you have time to deal with possible errors.
    dcbdbis wrote: Please bring the MATE desktop mainline.
    MATE is available through the AUR and through other repositories, as is mentioned on the wiki page. Is there a benefit to having it as part of the official Arch Linux repositories?

  • Unix layout question: single vs. multiple logical volumes

    Hello friends,
    I have a question on which I have seen various points of view. I'm hoping you might be able to give me better insight so I can either confirm my own sanity, or accept a new paradigm shift in laying out the file system for best performance.
    Here are the givens:
    Unix systems (AIX, HP-UX, Solaris, and/or Linux).
    Hardware RAID on a large SAN (in this case, RAID-5 striped over more than 100 physical disks).
    (We are using AIX 6.1 with CIO turned on for the database files).
    Each Physical Volume is literally striped over at least 100 physical disks (spindles).
    Each Logical Volume is also striped over at least 100 spindles (all the same spindles for each lvol).
    Oracle software binaries are on their own separate physical volume.
    Oracle backups, exports, flash-back-query, etc., are on their own separate physical volume.
    Oracle database files, including all tablespaces, redo logs, undo ts, temp ts, and control files, are on their own separate physical volume (made up of logical volumes that are each striped over at least 100 physical disks (spindles)).
    The question is whether it makes any sense (and WHY) to break up the physical volume used for the Oracle database files themselves into multiple logical volumes. At what point does it make sense to create individual logical volumes for each datafile, or type, rather than putting them all in a single logical volume?
    Does this do anything at all for performance? If the volumes are logical, what difference would it make to put them into individual logical volumes that are striped across the same one hundred (+) disks?
    Basically ALL database files are in a single physical volume (LUN), but does it help (and WHY) to break up that physical volume into several logical volumes for placing each of the individual data files (e.g., separating system ts from sysaux, from temp, from undo, from data, from indexes, etc.) if the physical volume is created on a RAID-5 (or RAID-10) disk array on a SAN that literally spans hundreds of high-speed disks?
    If this does make sense, why?
    From a physical standpoint, there are only 4 hardware paths for each LUN, so what difference does it make to create multiple 'logical' volumes for each datafile, or for separating types of data files?
    From an I/O standpoint, the multithreading of the operating system should only be able to use the number of pathways available, based on the various operating-system options (e.g., multicore CPUs using SMT, simultaneous multithreading). But I believe those are still based on physical paths, not on logical volumes.
    I look forward to hearing back from you.
    Thanks.
    ji li

    Thanks for your reply damorgan.
    We have dual HBAs in our servers as standard equipment, along with dual controllers.
    I totally agree with the idea of getting rid of RAID-5, but that is not my choice.
    We have a very large (massive) data center, and the decision to use RAID-5 was made at the discretion of our unix team some time ago. Their idea is one-size-fits-all. When I questioned it, they balked. After all, what do I know? I've only been a sysadmin for 10 years (but on HP-UX and Solaris, not on AIX), and I've only been an Oracle DBA for nearly 20 years.
    For whatever it is worth, they also mirror their RAID-5, so in essence, it is a RAID 5-1-0 (RAID-50).
    Anyway, as for the hardware paths, from my understanding there are only 4 physical hardware paths going from the servers to the switches, to the SAN and back. Their claim (the unix team's) is that by using multiple logical volumes within a single physical volume, you increase the number of 'threads' pulling data from the stripe. This is the part I don't understand, and it may be specific to AIX.
    So if each logical volume is a stripe within a physical volume, and each physical volume is striped across more than one hundred disks, I still don't understand how multiple logical volumes can increase I/O throughput. From my understanding, if we only have four paths and there are 100+ spindles, even if it did increase I/O somehow through the way AIX uses multipathing with its CPUs, how could it have any real effect on the I/O? And if it did, the effect would still have to be negligible.
    Two years ago, I personally set up three LUNs on a pair of Sun V480s (RAC'd) connected to a Sun Storage 3510 SAN: one LUN for Oracle binaries, one for database datafiles, and one for backups and archivelogs. I then put all my datafiles in a single logical volume on one LUN and had fantastic performance for a very intense database that literally had 12,000 to 16,000 simultaneously active connections using WebSphere connection pools. While that was a Sun system and now I'm dealing with an AIX P6 570 system, I can't imagine the concepts being that much different, especially when the servers are basically comparable.
    Any comments or feedback appreciated.
    ji li

  • iMac or MacBook Pro: Can't Decide, Need Help

    The iMac or MacBook Pro: I can't decide which is the best one to get, and I need a good answer before next Thursday. The 24" iMac with the Extreme Intel processor, or the MacBook Pro 17" with the 2.6 Intel processor? I'm into graphic design in the winter months and want something that doesn't crash when I'm designing websites with Dreamweaver. I need to know from you professionals which is better. Also, this will be my first Mac. Thanks for any help you can provide.

    Newbie,
    It really comes down to your desire for a fast desktop or a fast portable. Only you can decide whether or not you want to "go portable." Either one would suit your professional needs, and do so extremely well. And Macs simply don't just "crash" (not if they are working as they should). My well-over-1-year-old 17" MacBook Pro has never been shut down, and I rarely restart it. No crashes, no problems. I have seen my wife's 3-year-old iBook with an "uptime" of over 160 days (that's more than 5 months with no shutdown or restart!). Also no crashes, no problems.
    The 24" iMac is simply... stunning. And with the 2.8 GHz Core 2 Duo Extreme, it should really rock speed-wise. But it would remain in one place. The benefits of the iMacs are (obviously) the larger displays, larger hard-drive capacities, and sometimes better performance (as in the case of the 24" with the Core 2 Duo Extreme).
    On the other hand, there's a lot to be said about being portable. "Going portable" is potentially a complete paradigm shift, as far as how one works. My Macbook Pro goes with me almost all of the time, and I can use it for work or play at a moment's notice. I have a really nice dual-display setup at home, which has (sadly) never been used since I moved to a portable; I simply can't be bothered being tied to a single location to use a computer anymore.
    In the end, the decision is yours. Either way, you'll be happy.
    Scott

  • N8 Anna browser differences / issues?

    A couple of things I've noticed since the Anna upgrade, WRT the browser.
    1. When entering text in text boxes for some forum posts, the newline button doesn't add a line break (it actually works correctly on this forum, but not on others I read / post to). It did with the browser in PR1.2; I can't remember the behaviour in PR1.1.
    2. With the PR1.2 browser, I used to double-tap on the screen to zoom, which worked very well. This seems to work quite differently in Anna.
    Both are notable issues / irritations for me. I suspect 1 is a bug; whether 2 is expected or planned behaviour, I'm unsure.
    Truth be told, I've not found anything hugely positive about the upgrade to Anna; all it's brought me is irritation at the differences and things that work differently. Performance and stability seem the same to me, which is to say they've been good both before and after Anna. The difference in look and feel just makes me feel meh. I don't mean to be overly critical; I just expected more from it, really.
    My N8 (admittedly my 2nd; the first encountered the bug where it won't charge or turn on) has been well behaved, performs well, and is stable. My main irritations have been with things like WiFi access being, well, unreliable and flaky, which seemed best served with scanning turned off and WiFi power saving on. Post-Anna, that hasn't improved; WiFi is still flaky and not fully reliable (and there is context there: I've got plenty of other WiFi devices that are stable, including other mobile phones).
    The photo viewer no longer picks up all my photos since Anna (I'd have to move some folders; it worked just fine before Anna). In the browser I've found no benefits, merely negatives. The portrait QWERTY means nothing to me, as I find the landscape one tricky enough to get key presses accepted accurately. The video player seems no better (still no bookmark facility).
    Maps 3.06 seems the golden release; every beta since has been awful one way or another, with too much functionality or too many configuration options removed, and the app dumbed down (the 3.08 betas). I've tried 3.07 and both 3.08 releases, and in all 3 cases I reverted to 3.06 once I became sufficiently irritated that functionality I wanted, or wanted to configure better, had either been removed or was no longer configurable.
    I'm not sure what's going on with N8 software, but I'm considering imposing a software freeze on myself, because it seems that newer releases do nothing more than irritate me by not addressing things that need addressing, merely tweaking the look and feel and dumbing everything down, making it less of a usable device.

    laffcarl wrote:
    Well put
    Another browser issue entering text into forums, which has stopped me using one of the only sites I used to frequent before 'AnnaUpdate!': it has the strange effect of sending a line of question marks and maybe the odd square box (I have tried all font settings).
    Also, the ?'s and square boxes don't add up to the characters in the message, e.g. a 50-character message will be sent as maybe 10 ?'s, or sometimes just a few. The annoying thing about it is that it looks fine before and as you send it, but when you look at the message you have sent, it's as described, and rather sinister looking for the recipient lol.
    One step forward, three steps back!... Don't even start going on about the Flash player lmho... What I need is to be able to butcher the N8's camera into my old N95; then there would be no problems.
    £400-odd quid for a 12 MP Nokia camera; you can get an SLR for that money... But hey, wait... "You can't browse the web with an SLR!!"... Oh, it's OK, you can't browse it with Nokia's flagship either lmao
    *Shakes head in disbelief*
    Here's the thing: I'm not a hater, and I'm not just looking to bash Nokia. Personally, I'm drawn to their handsets because OVI / Nokia Maps is a killer app for me.
    All the same, though, and it really does irritate: key things people have been complaining about are still not fixed. Basic functions that people have wanted from various everyday apps are not implemented. Yet Anna looks a bit more funky.
    Same with the mapping betas; actually worse. 3.06 for me works very well, with practically all the key things working quite well. Not perfect, and it certainly could be enhanced.
    But the recent betas have been awful: ripped-out functionality, removed configuration options, dumbed-down functions. Again, it looks more funky, but it all just leaves me with a big "So what?". The newer versions simply don't work as well and are not as capable, all with some dogmatic opinions offered as the rationale for why some things have been removed, some options removed, and some bugs not addressed.
    If they'd just focus on getting the stuff that doesn't work well to work, without removing good features or configuration options that many will want to use, and THEN, if they have the time, update the visuals, that would be fine. But I'm sick and tired of the focus on form over function. It's making me question my entire reason for sticking with Nokia handsets (their mapping app), and I don't want to be told that mapping on phones is due a paradigm shift; I just want it to work well, with the maps updatable and relevant.

  • Traversing in a loop weekly

    Hi,
    I have a loop in which I would like to run a SELECT query on a weekly basis, starting from the last date and essentially traversing backwards week by week. For example, the table below represents the sample data for the query within the loop, without a WHERE clause. What I'd like to get from the first iteration is, say, a 5-year span backwards from the last date, which is 16/05/2014. In the second iteration the last date would be 08/05/2014, because that is the next weekly date going backwards, and again a 5-year period from that date.
    I am not proficient with SQL, especially when it comes to dates, and I am looking for a neat approach, as I have thousands of iterations and performance is an important factor here.
    Thanks

    If you don't need a massive table (just a few years of dates, not centuries and centuries), you could use master..spt_values, but keep in mind the table I used only goes up to 2048, so you'll have created something with a built-in limit that you might reach someday.
    There's also nothing wrong with using this table to CREATE permanent versions of the tables (so you use it once, at table-creation time, rather than over and over). You can also join it to itself a time or two and use row_number() for the numbering. There are 100 creative ways to achieve the same thing, as you've seen. As long as they aren't giving wrong sequence numbers, pick whichever method you like best and that meets your needs. (Most of the many articles you'll find are probably correct also.)
    Most to the point, though, I suggest making a permanent version of the calendar table (and a numbers table too), because once a person starts using a Numbers table and a Calendar table, they find lots of other reasons to use them that they hadn't thought of before (or that they were doing a different way), and since you are now in that category, make yours permanent too! (See the sketch after this post.)
    EDIT: Also, there's a little paradigm shift taking place here: a natural reaction to creating a calendar table or numbers table with way more rows than you need at the moment is that it's wasteful. OK, don't create one so massive that it needs its own dedicated server (unless you really need it), but remember that, as a relational database, handling data in sets is what it does best, so don't worry too much about creating one that seems too big. The overall benefit to you and your organization (standardization), combined with the most likely outcome (no discernible performance difference, possibly even faster than create-plus-use on the fly), says just make the permanent table.
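    For illustration, a minimal T-SQL sketch of that idea, assuming SQL Server; the names dbo.Calendar and CalDate are hypothetical, not from this thread:
    -- Hypothetical sketch (SQL Server): build the permanent calendar table once
    -- from master..spt_values; dbo.Calendar and CalDate are placeholder names.
    SELECT DATEADD(DAY, n1.number + 2048 * n2.number, '20000101') AS CalDate
    INTO dbo.Calendar
    FROM master..spt_values AS n1
    JOIN master..spt_values AS n2
      ON n2.type = 'P' AND n2.number < 10   -- 2048 * 10 days, roughly 56 years
    WHERE n1.type = 'P';

    -- One iteration of the weekly loop: a 5-year window ending at the last date,
    -- then the last date stepped back one week for the next pass.
    DECLARE @lastDate date = '20140516';
    SELECT CalDate
    FROM dbo.Calendar
    WHERE CalDate > DATEADD(YEAR, -5, @lastDate)
      AND CalDate <= @lastDate;
    SET @lastDate = DATEADD(WEEK, -1, @lastDate);  -- next iteration's last date
    Once the table exists, each iteration is a plain range query instead of a row-by-row date calculation, which is usually the faster shape for thousands of iterations.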
